Hacker News
Less is exponentially more (2012) (commandcenter.blogspot.de)
238 points by MrBuddyCasino on Sept 20, 2013 | 273 comments

I find it amusing that there isn't a single word about runtime performance in the whole essay.

In the end, when I code in C++, it's because I believe my code will run faster and generally use fewer resources, and I'm ready to accept the sacrifices it takes (in terms of my own time) to get there. Does that make me someone for whom software is about "doing it in a certain way"? Probably. If that certain way is "performance" for problems where it matters, I'm even proud of it.

C++ programmers don't come to Go because it turns out Go doesn't shine on problems where C++ was the answer. Simple as that, and there's nothing wrong with that, as long as Go has found its niche and solves problems relevant to its users. I'm not sure I see why Rob Pike needs to be so dismissive.

What gets me about MOST of the stuff I read criticising C++ is the complete disregard for runtime performance. I write software that has to consume billions of messages a day per instance. I've done it in several languages for different purposes. At the end of the day, if you implement the same algorithm in C++ it will be faster. I don't care what you think you can do in node.js, but you are wrong. When you have limited resources there is no option save for C and C++. It isn't a big deal; that is currently how we get things done.

People fucking around with distributed web applications are largely immune to the specific cost of adding a line of code, because rarely is what they are writing executed repeatedly in a tight loop, and thus they lose the ability to quickly measure performance impact.

One programmer costs ~ 300 cores. (Bay Area salary with benefits vs. AWS EC2 pricing)

If your code consumes more than 300 cores, you should care about performance. If your code consumes less, you should care about productivity.

Adding cores is easy while adding programmers is hard. Cores scale linearly while programmers don't. So I set the guideline at 1000 cores. Do not make performance-productivity tradeoffs below that.

What on earth makes you think that all problems can be scaled out to as many cores as you like? This is exactly the web developer mindset that people are referring to elsewhere in this thread.

Well you're trading one kind of B.S. for another kind of B.S.

There's a lot of B.S. that comes with C++, and there's an entirely different kind of B.S. involved with writing things in Java + Hadoop.

Personally I stay out of the C/C++ ecosystem as much as I can because threads are never really going to work in the GNU world: you can't trust the standard libraries, never mind all the other libraries.

The LMAX Disruptor shows that if you write code carefully in Java it can scream. They estimate that they could maybe get 10% better throughput in C++ at great cost, but the average C++ programmer would probably screw up the threading and make something buggy, and a C++ programmer that's 2 SD better than the mean would still struggle with cache line and other detailed CPU issues.

The difference between the LMAX Disruptor and the "genius grade" C++ I've seen is that the code for the Disruptor is simple and beautiful, whereas you might spend a week and a half just figuring out how to build a "genius grade" C++ program, taking half an hour each pop.

Really, you're trading execution speed for productivity, not "BS for BS", when you use these so-called "web languages". In some cases, there are other concerns such as memory usage or software environment (e.g. try installing a Java program on a system that doesn't allow JIT compilation).

Some problems can scale out, but only if latency between nodes is low enough and bandwidth is high enough. For example, an MMO server would not function as well if there was a 50 msec ping between nodes. You may or may not have control over that depending on what cloud service you use.

These are real concerns and should not be trivialized as "BS for BS" or "throw more virtualized CPU cores at it". Every problem is different; it should be studied and the best solution for the problem applied.

I'm talking about parallel programming, in general, as a competitor to high-speed serial programming.

In that case it is a matter of one kind of BS (wondering why you don't get the same answer with the GPU that you do with the CPU, waiting 1.5 hours for your C++ program to build, etc.) vs another kind of BS (figuring out problems in parallel systems.)

Not all problems scale out like that, but you can pick the problems you work on.

Java performs well as long as you're CPU bound. But memory is becoming cheap enough to keep substantial parts of a database in memory. Avoiding all that IO translates into enormous performance gains. Unfortunately, in Java (using the Oracle VM) you can't keep a lot of data in memory without getting killed by garbage collector pauses.

The genius of disruptor was in the data structure and access mechanisms, plus the fact that it worked for single producer / single consumer circumstances. It is certainly not an example you can tout for how Java is as fast as C/C++ under all circumstances if you are 'careful'. I think you are just falling prey to confirmation bias w.r.t. 'beauty' of code.

Having said that, C++ can be ugly as hell.

I'd say in some real life situations the gap is less than people think.

Back in the 1990's, when JIT compilation was new, I wrote a very crude implementation of Monte Carlo integration in Java that wasn't quite fast enough to do the parameter scan I wanted. I rewrote the program in C and switched to a more efficient sampling scheme.

When it was all said and done, I was disappointed with the performance delta of the C code. Writing the more complex algorithm in Java would have been a better use of my time.

But there are several things Java insists on that will cost you performance and that are very, very difficult to fix.

1) UTF-16 strings. Ever notice how sticking to byte[] arrays (which is a pain in the ass) can double performance in Java? C++ supports everything by default: Latin-1, UTF-8, UTF-16, UTF-32, ... with sane defaults, and supports the full set of string operations on all of them. I have a program that caches a lot of string data. The Java version is complete, but uses >10G of memory, where the C++ version storing the same data uses <3G.

2) Pointers everywhere. Pointers, pointers and yet more pointers, and more than that still. Data structures in Java will never match their C++ equivalents in lookup speed. Plus, in C++ you can do intrusive data structures (not pretty, but they work), which really wipe the floor with Java's structures. If you intend to store objects with lots of subobjects, this will bite you. As if that weren't bad enough, Java objects feel the need to store metadata, whereas C++ objects pretty much are what you declared them to be (the overhead comes from malloc, not from the language), unless you declared virtual member functions, in which case there's one pointer in there. In Java, it may (sadly) be worth it to not have one object contain another, but rather copy all fields from the contained object into the parent object. You lose the benefits of typing (esp. since using an interface for this will eliminate your gains), but it does accelerate things by keeping both things together in memory.

3) Startup time. It's much improved in Java 6, and again in Java 7, but it's nowhere near C++ startup time.

4) Getting in and out of Java is expensive. (Whereas in C++, jumping from one application into a .dll or a .so is about as expensive as a virtual method call.)

5) Bounds checks. On every single non-primitive memory access at least one bounds check is done. This is insane. "int a[5]; a[3] = 2;" is 2 assembly instructions in C++, almost 20 in Java. More importantly, it's one memory access in C++ and 2 in Java (and that's ignoring the fact that Java writes type information into the object too; if that were counted, it'd be far worse). Java still hasn't picked up on Coq's tricks (you prove, mathematically, what the bounds of a loop variable are, then you try to prove the array is at least that big; if that succeeds -> no bounds checks).

6) Memory usage, in general. I believe this is mostly a consequence of 1) and 2), but in general Java apps use a crapload more memory than their C++ equivalents (normal programs, written by normal programmers).

7) You can't do things like "mmap this file and return me an array of ComplicatedObject[]" instances.

But yes, in raw number-crunching performance, avoiding all the above problems, Java does match C++. There actually are (contrived) cases where Java will beat C++. Normal C++, that is. In C++ you can write self-modifying code that can do the same optimizations a JIT can do, and can ignore safety (after proving to yourself that what you're doing is actually safe, of course).

Of course Java has the big advantage of having fewer surprises. But over time I tend to work on programs making this evolution: python/perl/matlab/mathematica -> Java -> C++. Each transition will yield at least a factor-2 difference in performance, often more. Surprisingly, the Java phase tends to be the phase where new features are implemented, because you can't beat Java's refactoring tools.

Python/Mathematica have the advantage that you can express many algorithms as an expression chain, which is really, really fast to change. "Get the results from database query X", get out fields x, y, and z, compare with other array this-and-that, sort the result, get me the grouped counts of field b, and graph me a histogram of the result -> 1 or 2 lines of (not very readable) code. When designing a new program from scratch, you wouldn't believe how much time this saves. IPython notebook FTW!

Hadoop and the latest version of Lucene come with alternative implementations of strings that avoid the UTF16 tax.

Second, I've seen companies fall behind the competition because they had a tangled-up C++ codebase with 1.5-hour compiles and code nobody really understands.

The trouble I see with Python, Mathematica and such is that people end up with a bunch of twisty little scripts that all look alike, you get no code reuse, nobody can figure out how to use each other's scripts, etc.

I've been working on making my Java frameworks more fluent, because I can write maintainable code in Java and skip the 80% of the work needed to get the last 20% of the way there with scripts.

Try C++11.

"What on earth makes you think that all problems can be scaled out to as many cores as you like"

It certainly can't. For example, you can't do that in an app that runs on a phone.

But, when possible, this is the cheapest way to do it.

Not to mention other cases where it's the "only" way to do it (like, CPU heavy processing, video processing, simulations, etc). A smart developer can help with a %, but with limited returns.

For example, Facebook invested in their PHP compiler since their server usage would only increase, while the resources for that gain (in terms of people) are more or less constant.

Cores scale linearly while programmers don't.

It does not: http://en.wikipedia.org/wiki/Neil_J._Gunther#Universal_Law_o... and http://en.wikipedia.org/wiki/Amdahl%27s_law

Not only does it not, but sometimes it doesn't scale at all. Scaling software is a very difficult problem, especially when you have low-latency requirements.

And what happens when you write desktop software? You ask your customers to open Amazon accounts?

So I set the guideline at 1000 cores. Do not make performance-productivity tradeoffs below that.

In which field do you work?

I'm sorry, but I can't outsource the computation of real time ultrasound denoising to EC2. Nor can I do the work of my LTE radio modem on EC2. Clouds and scaling out on clusters are great answer to a certain set of problems, but far from a panacea.

In that case, you are running on >1000 cores and my guideline gives the right answer.

I'm pretty sure ultrasound denoising does not run on >10,000 cores. The point was more that "cloud this, cloud that" solves many types of problems, but real-time is not one of them. Cf. why you can't play a game on the cloud by sending JPG screenshots of the rendered scene 60 times per second while polling a joystick at 120 Hz and sending the input to the cloud.

I don't know why you bothered to provide a link and no commentary. This proves that people have attempted it, not that it is common nor that it produces equivalent results. I'm well aware that for some games, it can work "ok". Let me know when you get uncompressed 2K video at 60 FPS and <16 msec input response.

You suggested that it couldn't be done for games, so I gave you a link showing that it can be and has been done. There are a number of reasons why that isn't a popular way to play games, but it's not primarily a technical issue.

You certainly could pull off this architecture in a setting with a good LAN connection. Though I'm not really arguing that it's necessarily a great way to go.

Yeesh, way to mince words. Yes, it "can" be done... I guess what I meant by "can't" was "provides a poor experience to the point where it is generally unacceptable, and therefore not a solution; hence the impossibility could just as easily be expressed as `can't be done [right now]`".

>There are a number of reasons why that isn't a popular way to play games, but it's not primarily a technical issue.

It is entirely technical. Everyone who tried it shit all over it because it was a terrible experience, entirely due to limitations of internet connectivity (both latency and bandwidth).

I suppose the fact that many users have shitty internet connections is a technical issue. But for users with a high bandwidth low latency connection (FiOS, LAN, dedicated fiber) there's really no technical reason it can't work quite well.

The primary reasons these services didn't take off is that many users have shitty connections and almost everyone has a fast enough computer.

> One programmer costs ~ 300 cores. [...] If your code consumes less [than 300 cores], you should care about productivity. Adding cores is easy while adding programmers is hard. [...] Do not make performance-productivity tradeoffs below [1000 cores].

Computing time is cheap these days, but this kind of math doesn't make any more sense than comparing feet to miles per hour. Programming time saved is a one-time gain, whereas the performance loss is continuous. Let's say you write code for a single core, you spend 10 hours instead of 20 by accepting a 50% slowdown, and manage to compensate by adding an extra core. Depending on how long this code runs, there will be a point at which the ongoing cost of an extra core surpasses the one-time saving of 10 hours' programming.

If all you need is a working prototype then sure, performance shortcuts may be worthwhile. (Although you can't calculate trade-offs as suggested). But for long-term production systems they will always start hurting at some point.

Unless you never make changes to your software, development time is as continuous as run time. As someone pointed out below, the fact that Google finds value in Go seems to point to there being enough of a cost in development time that they're willing to sacrifice run time to reduce it.

That said, what makes sense for Google doesn't necessarily make sense for the rest of us.

Yeah, as I said it depends on how long the code ends up running. For sections of the code that you end up changing all the time you'll have a much higher proportion of developer time to "running core" time, so you obviously can reduce costs more on the productivity side. But there's no simple, 300-cores-per-developer math for it.

Generally I'd say it depends on the component. There are tons of components that never change, at least in my apps. Splitting them off and moving them to Java or C++ provides gigantic gains.

In practice, I think a lot of programmers simply don't know how to call C/C++ from Python, even though it has become so much easier since ctypes. Thus doing this is derided as a waste of time, dangerous, and whatever. You'll soon see that doing this has other advantages (like type safety).

Not all code is server-side code that can be addressed by elastic computing. Most C++ programmers work on desktop, mobile, and embedded programs. In such a domain, it is very likely that your code will be running on more than 10,000 cores on launch day (and with little or no intercommunication between the cores).

> One programmer costs ~ 300 cores. (Bay Area salary with benefits vs. AWS EC2 pricing)

That is gibberish. If I work at a company and we have 5 machines with 12 cores apiece already in place that is what I have to make do with. We don't all live in an elastic world.

Further to that, scaling large single computations across cores is a costly and often pointless exercise.

Yeah, companies do that all the time. Spend 3 months of a good engineer's time to save a few thousand bucks in hardware because the budget allocation is fixed.

In the long run, I think companies that value human capital appropriately will win.

A lot of problems are going to require quite a bit of engineering to scale to 300 cores...

I mean, if you're just serving some simple webpage, it's easy to just throw servers at it. But if you want to implement say a distributed k-means, the algorithm is different than for the single-threaded case. Not everything is easily scalable...

Yeah, companies do that all the time. Spend hundreds of thousands of dollars on hosting and hardware costs to save a few hours of a programmer's time because they blindly believe in the completely unfounded "truth" you are parroting.

You're lucky.

Some places are so inelastic that developers get hand-me-down laptops from sales people who couldn't sell, or when they buy a new machine from Dell they get one with two cores.

So.... they pay you a salary right? So long as they don't prevent it... you can take some of the money your company allocates to salary, and reallocate it to buying a few cheap VMs.

I was working as an enumerator for the U.S. Census in the year 2000 and one of the people on my team realized that for 2 hours worth of pay she could buy office supplies that would save us all (and the government) 60 hours worth of work.

She was stressed because there wasn't any official channel for us to buy office supplies other than the stuff they sent us.

I told her to buy the office supplies and say that she worked another 2 hours; this was breaking the rules but this did not strike me as at all unethical.

Now, not long after the 2008 crunch I was getting pissed about how long builds took on my cheap laptop (on which I was running both the client and server sides of a complex app).

Getting a better machine from management was out of the question, but I liked other aspects of the job, and 2009 wasn't the best time to go job seeking. So I bought myself a top end desktop computer and three inexpensive monitors.

When I left that company they wanted to buy the machine off me so as to keep all the proprietary code and data on it, but as things worked out, the value of my own proprietary code and data on that machine was worth a lot to me so I kept it, and fortunately things never went to court.

This type of decision has risks (for instance, you don't want to be the guy who loses a machine with social security numbers on it and forces his employer to pay for credit monitoring for 70,000 people) but it can be the right thing to do sometimes.

I am surprised that they let you use your own machine. I am also surprised that they didn't get the code and data from you (or take you to court), as most employment contracts state that all work done by the employee is considered company property.

You could also contribute to the company's rent payment. It would have a similar wealth transfer effect.

If a company is making a profit, they owe more to you than you to them....

you're profitsharing, right? so if you go around the company's self limiting policies, and make more profit by using your salary... then it's a net win to you.

What if I'm distributing my code to thousands or millions of users who are going to be running it dozens of times a day? Suddenly, spending an hour to make it run just a few seconds faster looks like a very good trade-off.

Right tool for the job, etc etc

> Cores scale linearly while programmers don't.

Cores scale linearly, but performance may not scale linearly to cores.

Ok, sweet, so we just need to tell game developers to start packaging additional processors with their games.

You need to get out more, you've completely lost perspective.

And then how do my players (I make games) use more cores?

They hire Amazon before playing my physics game?

Oh, I know, just put the game in the cloud...

Then players will become angry because the game only works when the internet works, and they cannot play on the underground train while commuting.

Sorry, but sometimes I have the impression that web developers forget that other sorts of development exist.

Depending on how you value things, being able to spend fewer clock cycles may be more "environmentally responsible" in the long run, though data centers do run a pretty tight ship. ;)

Setting aside the validity of this comparison, keep in mind that Go is coming out of Google, a place where running on 1000 cores is dinky.

(Note: I work for Google, but on open-source server software.)

This is going to become less true when/if power costs go up

>Cores scale linearly

You don't seriously think that, do you? People struggle with concurrency and parallelism all the time. There are tons of problems we have no way to scale up with more CPU cores; it is an open research problem to find ways to do so. Pretending that you can just buy speed is a huge mistake that costs millions of dollars.

The number of places in most code where you're running a tight loop a jillion times is vanishingly small. In those cases you're usually better off writing the vast majority of the code in a language that increases developer productivity while writing the occasional C/C++ module when serious single-threaded crunching is required. This is why so many modern higher level languages offer C bindings. The world isn't all or nothing. And this thought isn't specific to Go. The main language could just as easily be Java, Python, PHP, or even (eek) Ruby.

I think a lot of people who are doing (successful) distributed web apps are learning that performance actually does matter - and that it can be painful to optimise around poor architectural decisions.

People who 5 years ago were sermonising to me on premature optimisation are now telling me they're rewriting something in a faster language or how many millisecs they've saved using some new profiling snake oil they've just bought.

(whilst my background is in assembler/C, I now prefer to use prematurely optimised Ruby where possible just for the maintainability and the magic)

I think the lesson to learn is that you have to ensure you don't have barriers to optimization as well.

Knuth's original statement on premature optimization presupposed that the cost of optimizing later isn't really that much higher than optimizing now. I mean, obviously it's going to be somewhat harder - code is harder to read than to write, and fixing a performance failure is going to take longer than getting it right the first time. But still, that's not a humongous cost and there are so very many opportunities for optimization that chasing them without performance data is a waste of time.

But when you have an actual barrier to optimizing? Then you've got a real problem. If you're committed to a technology that has a hard performance ceiling and no easy way to break the slow parts out into another technology? That's a different problem.

It seems evident that Knuth's advice about premature optimization can be indiscriminately applied when you're talking about code for an individual component... but should be considered carefully against the potential costs when thinking about large architecture.

People who quote Knuth as aphorism are rarely as smart as Knuth. The optimization quote is great, but it's usually deployed as an appeal to authority to justify a preconceived, ideological decision -- God knows, I've done this.

Building software is hard, and it's not at all surprising (or reprehensible) that people will seek out shortcuts to winnow their decision space.

> The optimization quote is great, but it's usually deployed as an appeal to authority to justify a preconceived, ideological decision -- God knows, I've done this.

Most people I've known have used it as shorthand for "decisions based on optimization concerns require some justification that the optimization is important to the specific application and warrants the costs that come with it"; it's not an appeal to authority (indeed, the authority behind it is almost never referenced, just the bare quote), and it's usually, IME, a defense against a preconceived, ideological decision. (It works poorly to justify a preconceived, ideological decision, because it doesn't carry much weight if the person proposing the optimization has any basis for claiming that the proposed optimization isn't premature; as it shouldn't.)

I think the key to that quote is the agile mindset: the thing you think you're building might not be the thing that will actually meet the customer's need. So regular validation of a "good enough" product will be a better use of time than making the first prototype screamingly fast.

But optimizing later can also be the smarter thing to do. Especially when you're in the early stages, you don't know what direction your product/company will take. Optimization is a happier problem to have than say, trying to find enough people to sustain your business.

With no disrespect intended to the OP, this is an interesting comment considering that he came from Google, which is one of the world's largest beneficiaries of optimized performance. (If there's one place where run time trumps human time...)

I find it funny that when I had to switch to C++ I found it slow compared to C and Object-Oriented Pascal.

There are also some tradeoffs when switching languages. Some people would never make the move from C to C++, citing unnecessary complexity (e.g. Linus Torvalds).

This is essentially a question of having the right tool for the job. Switching from C++ to Go is probably a good decision in some cases and a bad one in others. But one should not remain so anchored in one language that one won't be able to consider some other, probably better, option.

> "In the end, when I code in C++, it's because I believe my code will run faster"

Completely agreed, and exactly my point. I am not moving to Go because it solves problems that I don't have.

Pike said: "That way of thinking just isn't the way Go operates. Zero cost isn't a goal, at least not zero CPU cost."

So Go's philosophy is to make some compromises for the sake of making the programmer's life easier. C++ is the other way around: there is a cost the programmer has to pay in order to have code abstractions without impacting performance.

I don't think anyone "loves" to code in C++, but people tend to like the performance that C++ provides and are eager to pay the cost of programming in a monster language that no single person fully understands.

I love coding in C++. The feeling of being so close to the metal that your actions are directly controlling the machine is something that was lost immediately when I moved to Java. It's even more of a disconnect in higher-level languages.

I have sort of the opposite feeling: I love that in C++ I can create very nice abstractions while being fairly confident that the compiler will transform them into code with C-like performance. That is, as long as you know what to avoid (namely: heap allocations and unnecessary copying).

This is the thing. This is what gets lost. C++ was designed with the premise that the language allows you to use it both in a C-like context and a Java-like context.

The problem is that some of the baggage of the former interferes with the latter. C++ is danged ugly in some places in ways that could be expressed cleanly in higher-level languages.

That's what C++ coders want - C++ with modules and real generics and fast compile-times and ways to cleanly implement and use garbage-collection if you so choose. A language that lets you say incredibly complicated and detailed things, and then lets you abstract them away into a library or a class or some other unit of organization that lets another coder use your stuff without absorbing the complexity.

All these "C++ replacements" assume that the solution is to gut out all the C-like stuff. What they should be looking to do is giving developers clean alternatives to the C-like stuff, including ways for the hardcore developers to provide their own clean alternatives to the C-like stuff. Don't give us garbage collection, give us the tools to write libraries that provide clean garbage collection to a developer.

>"This is what gets lost. C++ was designed with the premise that the language allows you to use it both in a C-like context and a Java-like context."

I got lost with the "Java-like context" thing. If I am not mistaken, by the time that C++ was designed there was no language with a GB other than Lisp dialects. Java would appear like 12 years after, so I am not sure what you mean by "Java-like context".

As for the design goals of C++, I think it was actually to create a superset of C that provides abstraction facilities that C doesn't have (starting with OOP, but by now way more than that).

If by GB you mean garbage collection.

Lisp, Smalltalk, Ada, Simula, ML, ...

Simula was the inspiration for C++ by the way.

Exactly. That is what is lost in most of these debates. In order to move forward in the design of systems programming languages, we must first recognize what makes C++ great, in spite of its deep flaws.

Replace C++ with assembler and Java with C/C++ and the statement is still true. Funny how that works...

True, but append "while retaining access to my favourite high-level abstractions like classes, generic containers and algorithms" and now it is no longer true.

That's the difference: in C++ you get the abstractions -- just like in Java -- but you don't pay for them.

For the record: I used Java extensively in my previous job. It's a language I love, and I do think there are many areas where its speed is sufficient. But I'm also quite fond of C++, and there is this nice feeling of "woah, it doesn't really get faster than this, and it's still readable and elegant!" :)

>"That's the difference: in C++ you get the abstractions -- just like in Java -- but you don't pay for them"

That's not entirely true: you do pay for them, not in performance but in productivity. C++ is a complex language, and even Alexandrescu touched this subject at GoingNative 2013; he said something like "C++ is a language for experts".

And Java is a language for offshoring projects with lower-skilled developers.

Not bashing the language, just stating the sad state of affairs.

I used to really enjoy writing C++, but that feeling can be compared to doing a really big workout. It's a gratifying hurt.

> I don't think anyone "loves" to code in C++

I do. :)

The reason is four-fold:

* Performance doesn't matter for a large set of systems we write.
* Performance is far more a product of the programmer than the language you use (within reason, naturally. Don't do video decoders in Python :)
* Large-scale systems where there is no single tight loop anywhere are way different beasts. Their performance stems largely from fast RTT in the edit-compile-test cycle and programmer productivity, not from fast native code execution.
* To some problems, latency or sustainability matters more than throughput.

> I find it amusing that there isn't a single word about runtime performance in the whole essay.

It most certainly is. In fact, it's the answer to the opening question, "Why does Go, a language designed from the ground up for what C++ is used for, not attract more C++ programmers?" You might not have noticed it because he only refers to the performance issue as a "Zero (CPU) cost" mentality.

> C++ is about having it all there at your fingertips. I found this quote on a C++11 FAQ:
>
> > The range of abstractions that C++ can express elegantly, flexibly, and at zero costs compared to hand-crafted specialized code has greatly increased.
>
> That way of thinking just isn't the way Go operates. Zero cost isn't a goal, at least not zero CPU cost. Go's claim is that minimizing programmer effort is a more important consideration.

There is a lot of crap in C++ which doesn't help with performance at all: class hierarchy with all the inheritance/friends/virtual/constructors/destructors, godawful syntax, template model, exception handling, 101 ways to do the same thing, lack of package/module handling etc. Rob Pike addresses those in the article.

Yes, Go is slower, but it's not slower because of fixing those errors; it's slower because it introduces different features suitable for its niche (GC, reflection, etc.). Rust takes a similar approach to avoiding C++'s design mistakes and doesn't sacrifice performance in doing so.

Those mistakes (mainly the whole "object model" thing) are what he is talking about in the article. Pointing out Go's performance isn't really an argument against his point.

Templates do help with performance. Look up the performance characteristics of qsort (C) vs std::sort (templatized C++ sort).

I recently tested both extensively and the performance was exactly the same (under the Intel compiler and Visual Studio), so presumably those compilers found a way to optimize it. Both are much slower than a hand-coded version anyway, so it doesn't really matter (link to an implementation which beats standard qsort/std::sort performance by 2x/3x, at least on my data: http://www.ucw.cz/libucw/doc/sort.html).

If your only, or even main, criterion was performance, you would write in assembly language. The fact that you don't means that Rob probably nails it with this comment:

"C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way."

That's absurd. The law of diminishing returns comes into play, and for most cases, assembly is just not worth it (whereas C or C++ is). Rust seems like another interesting take at that sweet spot, but it seems like in Go this was an afterthought (though to be fair, the performance benefits of C / C++ are often cargo-culted even when that kind of performance is completely irrelevant, so Pike still has some valid points).

the implication

  performance ⇒ C++
is correct but the commonly assumed implication

  C++ ⇒ performance
is not. Idiomatic Go can be faster than a mess of C++.

Someone mentioned a while ago how a common pattern of comments on programming forums is to take the most literal and uncharitable interpretation of a post, and reply to explain how this extreme view is mistaken.

I've never seen anyone defend the idea that writing something in C++ will necessarily make it high-performance and I would be very, very surprised if that's what the OP meant.

what a coincidence- I was thinking about this this A.M. I was conceptualizing a flowchart. Something like:

1. Observation.

2. Massive overgeneralization of observation and posting to the Internet.

3. Refutation of massive overgeneralization.

4. Insistence on narrow range of circumstances in which observation is true.

5. Announcement of extraordinary exceptions.

6. Anecdote in which even that exception was inadequate.

7. Call for advanced structural frameworks to encompass vastly differing perspectives.

8. Recollection of failed attempt at structure in 1976 (Optional: suggestion to live in geodesic homes).

9. Platitudes about how the details don't matter anyway.

10. Violent exasperation that details won't matter, including extreme hypothetical circumstances.

11. Shifting of blame to political figures and/or youth and their gadgets.

Yeah, idiomatic any language can be faster than a mess of Go.

Well written code in any language will almost always outperform poorly written code in another language. Python being 50x slower than C++ doesn't matter if you are using an exponential implementation of a linear problem.

The thing, IME, is that someone writing code in C++ is more likely to be aware that they are using an approach with suboptimal O. In a high-level language you can just as easily write a routine with bad O, especially if you don't understand the PL's underlying data structures. So then you get bad algorithm performance + bad Python performance.

That's what I meant. In most cases, Go's performance advantage over Python or even some messy C++ is of little use. But in some rare cases where we need to push the very limits of the hardware, C++/C/assembly is preferable to Go.

Sounds like Rust will be a better solution for those types of problems.

Exactly.. I was thinking of Rust as a compromise between performance and safety/productivity.

You should switch to assembly then. You can get even better performance than C++, and hey, if you don't mind sacrificing your own time to make the program run faster, it seems like the right thing to do.

I'm going to fall for this trolling:

It's a trade-off. Assembly is an order of magnitude less productive, and it doesn't offer significant benefits. Outside of your critical path, you might actually hurt your runtime performance unless you really, really know your instruction set. When people say 'C trades speed for ease', they don't mean they want to design an ASIC to make their operation blazing fast. They mean C is at a sweet spot of developer productivity and execution time, for their specific application.

As much as the original statement was hyperbole, it's often overlooked by people that the arguments they make to explain their choices often lead to different results than they expect if actually followed.

If your work consists of writing extremely demanding code WRT performance, then it probably is useful to stop and think for certain projects, or portions of projects, whether it's worth going to assembly.

Similarly, if you find any utility in going lower in the language stack, it might be worth going higher for some projects or portions of projects.

Whether you actually use anything else is up to you, but blind adherence to a specific language is limiting, and may well be detrimental to the work you are trying to accomplish.

(you in this context is general, not applied to the parent specifically)

I was being snarky, but not trolling. I was very much trying to get at the same point you're getting at.

There is a trade-off between developer productivity and runtime performance. The author said that Go is appealing to people coming from a Python or Ruby background, which is a strong indicator that Go has fundamentally gotten the developer productivity side of the equation right.

So we should try and think about how the equation balances here. Is Go an order of magnitude slower than C++? Don't know. I think it's not, though. My impression (which might be wrong, so please correct me if necessary) is that Go programs run approximately as fast as C++ programs that are written using C++ automatic memory features like smart pointers. Is Go an order of magnitude faster to develop than C++? Again, I'm not sure, but my perception is that it probably is. Python devs wouldn't be interested in it otherwise.

I agree with you, and some actual facts can be provided. Just look at the benchmark game between Go and C++ in quad-core x64: http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

On average, C++ is three times faster than Go across the tests. These benchmarks aren't definitive, but you can assume that in general the difference in performance is much less than an order of magnitude.

Now, look at the same comparison between Go and Python3: http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Python3 is on average 20 times slower than Go. And I love Python, but Go isn't that much harder to use. It's quite a nice language, actually.

So, the exodus from Python to Go is very understandable, you gain a lot of performance without sacrificing much. And I think there's room for performance improvements in Go, perhaps in a couple of years C++ developers will make the transition. But I think that the real C++ killer is Rust. Time will tell.

Not at all. Go gives (a lot of) the advantages of Python and not many of the costs. I would say that Go dominates Python, giving the advantages of that language with few of the costs. Anything you can do in Python can relatively easily be done in Go. Replacing dynamic typing with reflection seems to work pretty well.

It does not similarly dominate C++. C/C++ cannot be replaced by something like go for the very simple reason that it wouldn't work. Go itself depends on a C runtime and a C compiler written in C, even ignoring the operating system it has to run on (which also cannot run without a C/C++/assembly core). The same goes for languages like Java, Erlang, Haskell, ... (Java is particularly bad, since it has a huge complicated runtime which is almost completely C++)

After all, who will garbage-collect the garbage collectors? (This is a simplification; the garbage collector is a major problem, but not the only one. There are dozens.)

Two points:

* I read a lot of compiler-produced assembly -- these days it's often surprisingly clever. I don't think most developers could beat them with hand-coding even if they were willing to spend 10x-100x the time in development and debugging.

* As other people on the thread have mentioned: it's all about the memory. In a world where a cache miss is going to cost ~100 cycles the actual instruction stream isn't even your biggest concern -- programming for performance increasingly means counting cache lines. I can do that just as well with C++ code as I could with assembly, at least as long as the compiler provides things like prefetch primitives, etc.

There are still some super fast-path areas where expert assembly can beat a compiler (crypto algorithms, ...) but it just isn't attractive for most projects no matter how much developer time you're willing to spend.

Just depends on the problem -- people do use assembly all over the place when the performance speedup is warranted, for example video encoding (e.g. x264).

Even Go uses assembly in places -- for example, bytes.IndexByte.

The point is that C++ still has real performance benefits over Go, and it's not just used out of cussedness or backwards-thinking. (Though Go can be pretty close. I think it's a shame they didn't go w/ a simple refcounter + weak references approach to GC.)

That simply isn't the case any more. GCC and LLVM can, for the majority of cases, write far faster assembly than the average assembly programmer. Naive C or C++ is faster than naive Go, but naive assembly is probably slower than even naive Go (the Go compiler produces optimised assembly, just not as well optimised as the assembly produced by C compilers).

Just an opinion, and maybe even wrong: if you really care about performance you (must) care about control, and in the most extreme case you want C, not C++.

To restate, C++ programmers are doing C++ because they are trapped in a corner world where they must suffer for speed. Go's architects think they've got a band of problems where ... you just don't have to suffer so much. "no, me, I still need to suffer?"

I strongly suspect that Rob Pike understands the importance of performance: a) he is old enough to respect resources, and b) Google programs need performance.

"C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way.

The issue, then, is that Go's success would contradict their world view."

This, "they can't handle how awesome it is" is kind of a fun rationale and strikes me as plausible, but it would be more interesting to me to hear from some of the stellar c++ programmers at google as to why they aren't interested in switching.

C++ was the first language I really invested in mastering during and after college - it was like, "all I need to do is read these 5 books about how to avoid the pitfalls and use all the powerful tools, and it's amazing!" What really changed my mind was trying other languages (first python) and seeing how immediately productive I could be without so much as skimming through the "dive into python" website. So specifically, I'd be interested in hearing something from an expert C++ programmer who has really given go a try and decided it's not their cup of tea.

So specifically, I'd be interested in hearing something from an expert C++ programmer

I don't know what qualifies as an expert, but as someone who is using C++ and Java (and did look at Go):

- If I want a garbage collected language that is reasonably fast and safe, I prefer Java over Go, since there are many high-quality libraries available, there are good IDEs, it's easy to instrument the JVM (heck, I can even debug a remote application running in Tomcat in my IDE), and it has a good central repository and package manager (Maven). Go is hardly more expressive than Java, so if I am looking for a replacement, it will probably be Kotlin or Scala.

- There are applications where I want manual memory management, auto-vectorization, real generics (templates), etc. Go has none of these features.

In other words: Go does not offer me any advantages over Java or C++. At the same time, moving from Java to Go would require me to sacrifice a lot of well-developed and reliable tools and libraries. I know that someone is going to mention Goroutines and channels. There are great concurrency options for Java as well, e.g. Akka, which implements actors, futures, a scheduler, remoting, and clustering. In comparison Go's concurrency offering is quite pale. Sure, you could implement something like Akka on top of Go, but it currently does not exist (except for a binding against Erlang's C node functionality).

The problem that I find with Java is that it forces you to write code in a certain way; OO programming seems to have become the de facto way of dealing with large problems, especially in enterprise, and sometimes I think far too much time is given to design patterns (that do work in a /lot/ of cases) and how they can be shoehorned into the program you are making. Java does have some amazing features, libraries, etc. It also has a lot of security flaws (afaik). Also, IDEs really do not interest me in the slightest.

Writing in Go isn't as strict or structured as Java, which I really enjoy. I used to write a lot of C++ and although I was never a pro, I really did enjoy programming in it. Go is certainly a breath of fresh air to develop in by comparison, and I find that I can write more concisely; if anything, performance seems to improve as I am not making the trivial mistakes that "powerful" languages like C++ allow me to repeatedly make. KISS.

As always, it's horses for courses.

Java is deliberately simplistic because it prioritizes code maintainability (even by mediocre programmers), but I thought that was the same philosophy as go (e.g. limited type system). And I don't know what you're talking about wrt security.

Have you tried a modern, expressive, strongly typed functional language like Scala or Haskell? I can well believe that go would be an improvement over Java, but Java's not exactly state of the art these days.

I'm not sure I would call Java simplistic... It seems incredibly bloated to me, especially in comparison to Go and other languages.

With regards to security issues, I was under the impression that since Oracle took it over, Java has had prevalent issues that have just been ignored by Oracle. A quick search seems to agree...

I have played with Haskell and really enjoyed the language. Although imperative languages are my bread and butter it certainly gave me a different approach to programming, primarily stepping away from an OO design and love of recursion!

I would call Java simplistic, but for a funny reason: with other languages, you can do things the straightforward way using basic constructs, or you can do things the expert way using a variety of more complex (sometimes user-defined) constructs. With Java, you have to do things the simple way. Sure, you have to define a class with a main method in it instead of just being able to print "Hello World", but there's one less way to do that in Java than in scripting languages.

Haskell is a lovely language and everyone should dabble in it for a while, for that very reason: because it makes you look at programming differently.

You're probably thinking of security issues with the browser plugin? They're... unfortunate, but don't affect the language as it's actually used. Don't write java applets, and as a user, don't install the browser plugin.

Indeed. Java in a sandbox in the browser is pretty much a failure. But there is no Go browser sandbox at all, so I'd hardly say Go is better off here. I guess you look better when you don't make the attempt at all.

>but Java's not exactly state of the art these days.

Nor was it when it was created. It was born outdated (just like go!).

Your comment really scared me when you talk about not liking strict or structured code. I maintain a lot of legacy code and one of the things that bother me the most about the C/C++ code I work on is the complete lack of organization. Because there's no strong conventions built into the language on how to organize things, people just sort of do it willy nilly. Wanting to do a serious refactor is a nightmare of grep and other crap. Java's packaging at least encourages some level of code organization and grouping of similar functionality. Could you organize things well in your C/C++ code? Of course. But it seems to not happen. That's not to say that all the Java code I've maintained has been well organized, but it's leaps and bounds better than the C/C++ stuff.

I'm not trying to suggest that /all/ code shouldn't be structured; it's all dependent on what you are writing. I have contributed to a decent sized C++ ftpd program (https://github.com/jawr/ebftpd) and it is well structured in my opinion. Go can be well structured too, it just seems to approach it in a different way. As I said before, it's horses for courses.

I program in C++ at Google. I won't make claims about "stellar". Here are a few Google-specific reasons Go hasn't replaced C++ for me:

* The long C++ build times he was complaining about are mostly gone due to distributed builds. http://google-engtools.blogspot.com/2011/09/build-in-cloud-d... Throwing resources at the problem isn't ideal, but it does work.

* Google has a lot of C++ library code, including client libraries for things like Bigtable. Go's Google-specific libraries are a work in progress.

* For serving systems, I care more about machine efficiency and latency than about development time. I'm still skeptical of those properties in Go. Obviously the compiler (and my comfort with it) will improve over time. So will the garbage collector, to a point. It will always operate on one large heap like Java's. People say Java's GC works fine, but they spend a lot of time tuning it and then quote pause times that don't thrill me. I've seen a lot of problems caused by bad tuning parameters, like (for example) Java heap sizes that exceed the size of the container it's running in. And relatively hard-to-diagnose and slow-to-recover GC death spirals because the defaults suck: Java just bogs down in situations where a C++ binary would dump a heap sample, crash, and get restarted. So I'd be more comfortable with a system like Rust's: many smaller heaps, language enforcement of ownership rules when transferring between its equivalent of goroutines.

Now for internal tools, where speed of development is more important, I wish I could magically switch everything over to Go instantly. Some of that stuff is in Python. Python's fragile in an always-refactored large codebase without 100% unit test coverage. The library situation is worse than Go's (few Python-specific libraries, and I hate SWIG). And while performance is less important for internal tools, Python's horrible single-threaded performance and GIL are a real pain. And finally, using Go for internal tools might eventually give me the comfort I need to start using it for more serving systems.

Regarding the post and your reply: Rob's post was posted after the "distributed builds" blog post and it specifically says he was using distributed builds.

I'm a Google C++/Python programmer who has (mostly) switched to Go for new development. Rob's statements about C++/Go compilation performance are still relatively correct. The client libraries are sufficient for many, but not all, projects.

I think what most people don't appreciate is how cleverly the Go team at Google established a reasonable migration path for internal developers. Some systems have migrated far more quickly. Hopefully, we'll see some papers describing the migration at some point.

The build system we have now is significantly better than what was available at the time he mentioned (September 2007). IIRC back then the local machine was a significant bottleneck in dispatching work, storing intermediate build products, and running tests. If you weren't around then, send me a mail from your work address and I'll tell you about when we had to walk uphill both ways in the snow.

Now, build times are not a major loss of productivity for me. In fairness, I don't work on the same system he was working on, and presumably not the same as you either.

Thanks, I misread the date in the presentation.

I started a year after that time.

Hi, Scott, Alejo here. :-)

My perspective is similar. I do a bit of Java, C++ and Python, and recently wrote a ~1K LOC backend in Go. I figured I'd share it in case someone finds it interesting.

For low-level infrastructure projects that don't have very complicated business logic (e.g. caching systems, gateways between different RPC systems, etc.) but which run on many machines and must handle many requests per second, I'm sticking with C++. It's sometimes painful to troubleshoot race conditions around resource destruction issues, given that I use lots of async calls and threads, but still ... I don't think I'd feel comfortable investing significant effort building this kind of system in Go or even Java. That said, I'm trying to gradually migrate away from callback spaghetti and data sharing between threads towards ... specialized threads and message passing, a model far more similar to Go's. But I prefer C++ over Go and Java because it feels more predictable and, barring bugs, seems to have a lower operational cost. I don't want to imagine things like Bigtable or Google's other storage systems written in anything other than C++.

For higher-level backends with more business logic but which still must handle many requests per second, I lean towards Java. I feel that Java brings a few headaches for the production engineers that for some reason happen less in C++, such as having to tune the GC for performance and latency. Systems in C++ feel significantly more predictable, from my production perspective. But I'm willing to lose that predictability and switch to Java for the gains for the development team. I'm not sure I would be ready to switch to Go: I perceive the state of internal libraries on Go as trailing significantly behind those in Java. I guess this may change (or my perception may even be already dated).

Finally, for internal servers with little load, I'd already pick Go over Python. I come from a strong Scheme background and originally really liked Python, a very nice subset of Scheme. However, I'm coming to find projects in Python relatively hard to maintain. :-/ I feel that when Python projects hit a certain size (when a program reaches something like ~20 non-trivial classes, with some inheritance or interfaces, something like that), it becomes hard for newcomers to extend them successfully. It may be just me, but I find that the lack of a canonical way to do things like interfaces means everyone has their own ad hoc way to do this and ... things just get messy, or at least more messy, faster, than they do in the other 3 Google languages. It could also be the lack of type checks at compile time, I'm not sure. But I digress; all I meant to say is that I think Go may actually make things a lot better than Python in this regard, so I'm starting to use it in this area.

Hey Alejo!

You'd know more about Java's strengths than I would. I haven't really touched it since we parted teams. I just remember production pain...

I'm sure the Go library situation will improve, and we can nudge that along as we write internal tools. I'll definitely start my next one in Go. I might even translate my last one to Go sometime.

I don't think it takes a C++ expert to point out that the more plausible explanation is simply that Rob Pike is a narcissist.

This is the same guy who said you don't need to know what I know (re: endianness) because you will never write a compiler. Consummate narcissism.

Someone I know at google said he was once talking to Rob Pike about go and asking questions about some of the design decisions. After answering a few questions, Pike looked at him and said "Do you know who I am?". (supposedly)

I once questioned the design decisions for a particular project of the guy who made them, and his response was "who the do you think you are?"

"Say my name".

Isn't that elitism?

Narcissism: I am the most beautiful of all.

Elitism: I am more beautiful than you.

Isn't that pedantry? ;)

No need for personal attacks.

You're right, and my point wasn't to attack his character but to explain the nonsense coming out of his mouth. There is a clear and simple explanation, consistent with his history, that is (perhaps unfortunately) not flattering.

However, he has a history of being insulting towards people who want to use different paradigms. So if you're going to bring that line up with anyone, bring it up with Rob Pike himself.

He does come across as an academic type. He did not say anything about accompanying frameworks/libraries, you know, the actual practical stuff.

> "I'd be interested in hearing something from an expert C++ programmer who has really given go a try and decided it's not their cup of tea."

I am not a C++ expert, but I have programmed a fair amount of C++, so I will give you my two cents.

I use C++ because of a blend of three things:

* Performance,

* Abstraction and

* Portability.

If I cared about just a subset of that tuple, I would probably code in something else.

For instance, if for some project performance is not an issue, I will probably use a higher-level language like Python or Scheme. If I feel that Abstraction is not as important as performance and portability, I will use C.

When I care about the three of them I feel there are not too many options.

I agree, in the games industry we use C++ for a few reasons:

* Performance

* Portability

* Direct API access (D3D, OpenGL, console specific APIs)

* Large legacy codebases

* Control over memory management

Go doesn't really address all these areas adequately at the moment from what I've seen. For me personally, for those problem domains where the above are not a concern I find other languages more compelling than Go - Haskell, F#, even C#.

It seems likely that a compiled functional language like OCaml or Haskell would do better overall - maybe a slight sacrifice in performance, but much more abstraction available, and greater portability since more of the library is standardized.

Not to mention benefits of being strongly-typed.

I'm not an expert but in the gamedev domain, control over memory is fairly vital. It seems like it would be for lots of the other stuff C++ is used with too. In C++ I can allocate all my memory upfront in a pool since I know the exact size of my level, the number of objects and so on. Then use custom allocators/static for just about everything. When I make an object at runtime I can just snag preallocated space from the pool. With the ability to cast the pool's primitive data types to pointers I can even save on any need to index the blocks since I can use the unused memory to point to the next block of unused memory (although it's probably a minor optimization).

Go drops that control totally in favour of its built-in garbage collector, which the Go devs think they can just get right. That seems unlikely (the current implementation is apparently a quite bad stop-the-world one).

Another issue that strikes me is library building. Afaik Go doesn't produce objects that can be linked against by non-Go stuff. C does this; it has an ABI.

This means I can write one library in C/C++ (and apparently Rust) and then have wrappers for just about every other programming language in existence. (Although C++ can make this painful with things like exceptions http://250bpm.com/blog:4 ).

It might be that Go's interfaces make it really useful for making Go libraries in, but some libraries need to be language agnostic as much as possible.

Many of the things Go initially solved over C++ are being chipped away at too. I love reflection; it would be very useful for games, serialization and so on. C++ doesn't have it, but there is a standards working group looking at it now, so we could see it in either C++14 in a year, or C++17 (probably with implementations before then). C++11 got threads and so on, and there is a working group doing transactional memory, more threading stuff, networking and so on. So we could see something like Goroutines and Channels but still have access to low-level things like raw memory barriers. C++ tooling is set to explode with what the clang people are up to.

Go seems great, but it does seem focused on the 'business' kind of domain. Maybe future versions could address some of the issues like the GC (either fixing it so it does meet the performance requirements, or allowing custom memory options; C# has the unsafe keyword, for example).

EDIT: I note that Go might provide a package "unsafe" that could allow for some things like a custom GC but apparently would be hard to implement.

  >in the gamedev domain, control over memory is fairly vital.
C# is taking over the game world. Most iOS games are, for instance, written in C# these days.

I have worked for a company who wrote a state-of-the-art 3D PC gaming engine in C#.

Being garbage collected doesn't remove the ability to manage memory. It just removes the need to call malloc directly.

The fact that Unity uses C# as one of its scripting engines doesn't mean that C# is taking over the game world (I'll assume that's where the C# reference is from - since Unity is incredibly popular for iOS devs). The vast majority of games are still C++, often scripted in Lua, Python, Javascript and C#.

I wouldn't describe Unity's relationship with C# as 'scripting'.

I realise that there are no doubt a number of custom C++ engines out there with embedded scripting languages, but the push for multi-platform mobile games has caused a huge shift towards Unity.

Maybe I'm moving in the wrong circles here — but anecdotally all the people I know making iOS & Android games are developing in Unity.

From the horse's mouth: http://answers.unity3d.com/questions/9675/is-unity-engine-wr...

And I don't doubt that a lot of devs use Unity; it's most likely the most popular game engine. But it's still a C++ project, despite the languages used to script game events...


> "C# is taking over the game world. Most iOS games are, for instance, written in C# these days."

When the engine underneath is done in C# as well I will take that argument. So far, it would be the same that saying that most console games are done in UnrealScript.

Most iOS games aren't C# nowadays. Unity likes to trot out the 70% number for mobile games, but it's mostly BS. That isn't counting games that actually get published; based on their registered numbers, a lot of them never actually make it into the store. I'm willing to bet that most iOS games are still Obj-C.

I'd take that bet.

I know many iOS game developers and none of them use Objective-C any more.

(in fact I only ever knew one)

> state-of-the-art 3D PC gaming engine in C#.

How does it compare to Cry Engine, Unreal Engine, Frostbite, ...?

Because these are the state of the art gaming engines these days.

It compared surprisingly favourably. The C# engine was comparable (and regularly compared) to the Unreal Engine being used elsewhere in the company.

It was C# from the ground up. Scripting was provided via other .Net languages (Python in the case of the UI team).

Those are state of the art graphics gaming engines. It's quite hard to make e.g. an RTS using any of those. They are too FPS specific.

Command & Conquer is being built on Frostbite. [1]

[1] http://en.wikipedia.org/wiki/Command_%26_Conquer_(2013_video...

Are you referring to Unity? If so, while the game code is run via Mono, it is C++ internally.

Which is just some machine code eventually. It's just electric signals people!

Are you SURE about that?

Well, for shitty games maybe it is true, shovelware aplenty on iOS anyway.

But for the part of the market that actually has a profit (not just revenue) C# is not that dominant.

In terms of profit, the most profitable games probably still rely on C++.

Also, most engines (including Unity) are made in C or C++. They sort of abstract the memory management away from the game author, but they still do a lot of manual fiddling themselves. XNA, for example (which lets you write C# games directly against the hardware without a C++ engine), is notorious for bigger games having memory issues.

Also, 2D "retina" games use such absurd amounts of memory that they are almost impossible to do purely in high-level languages. You need some C or C++ somewhere to handle them, at least for texture loading, otherwise you end up with excruciating loading times.

Loading times don't have anything to do with the language.

Loading times have been an issue since game consoles moved away from cartridges.

I'm interested in knowing more about that engine. Can you say who/what it is?

Realtime Worlds

Ah, yes. Out of Boulder?

Which game's engine was in C#? Project MyWorld?

Yes. But out of Dundee, Scotland.

Memory management is extremely important for high-performance code, and C/C++ gives you complete control over it, obviously with extra work needed to do this. But when you need the speed, it's so worth it, it's not even funny.

On top of this, things like bit-packing and tagged-pointer type things can allow you to save memory, allowing you to fit more stuff into cachelines more efficiently, giving even more speedups.

I am an adequate C programmer, and with some care I can turn out code that seems to be reliable. I am not a C++ expert, but at one time I did want to obtain mastery over it as well. I ran into problems.

There was some recent discussion about the "Edward C++ Hands" article. [1] And that discussion summarizes neatly the issues I have with C++. Yes, I get that it is a powerful multi-paradigm language. Yes, I get that it will take time to master.

But should this language (or any language; I'm looking at you Javascript) have so many hotly-debated dark corners and pitfalls? It is an enormous undertaking to just figure out what the best practices are. And it is then a further enormous undertaking to get the rest of your team on-board.

[1] https://news.ycombinator.com/item?id=6414162

> C++ was the first language I really invested in mastering during and after college

This is partially why you picked up Python so easily. It's one thing to approach a language like Python as your first, but if you took the dive and learned c/c++ prior, everything else is going to be much easier to grok.

Yeah, good point; it's impossible to remember what it's like not to know something. I also haven't done systems programming in a long time, so it would really be more fair to have someone comment who is considering a language under higher performance constraints.

That said, I can report on what I prefer right now, knowing what I already know (having spent 4 years programming java professionally as well) - and I truly prefer a more functional approach that comes a lot more easily in languages that support closures and first class functions. It also strikes me that many of the best practices in java / c++ are basically moving towards more functional programming (prefer composition to inheritance, make objects immutable by default, interfaces over classes when possible, dependency injection / referential transparency and parameterizing methods by interfaces e.g functions).

I've been programming in C/C++ for 15+ years and I enjoy it. Personally I think the C++11 extensions are not useful and could bloat/kill the language. It feels like they are trying to make it a scripting language, which isn't why people use C++.

However, I recently started using Python and I absolutely love it. Using Python, I can usually whip up a proof of concept within minutes because things just work. I wrote an OCR program using a mix of OpenCV and Tesseract in Python, and I was able to get a proof of concept up in about an hour. Of course, the performance isn't nearly what I want it to be (on the order of seconds, which is too slow for my application), but the fact that I could get something working so quickly is why I love Python so much.

If C++ programmers were attracted to making their language more elegant and safe yet more powerful at the same time they would have switched to any of the statically compiled, typesafe, managed languages years ago.

If Rob Pike was doing things in C++ that Go could also solve, that means he could've done them in C# or in Haskell too, which means he shouldn't have been doing them in C++ in the first place. (Stroustrup explains why in his recent GoingNative keynote, if you doubt that.)

Of course, solving that problem by inventing an interesting new language without any of the (psychological) baggage of those other languages is awesome. It seems to be rather successful, but it's naive to think that it would attract C++ programmers; there's just nothing in there for them. The C++ programmers who are looking for change have more to hope for in, for example, Rust.

> If C++ programmers were attracted to making their language more elegant and safe yet more powerful at the same time they would have switched to any of the statically compiled, typesafe, managed languages years ago.

All of those foist garbage collection on you. Many of them find text-based macros such as C++'s template engine morally offensive and so they don't include that feature.

If you want a statically-typed Python, then Scala and C# and the like will serve your needs nicely.

If you want a modernized C++... it still doesn't exist. Rust looks promising, but that's not exactly mature technology.

It's also completely subjective to call other languages more powerful or elegant. For example, I think Haskell's lambdas are less elegant than C++'s.

Haskell's lambdas are extremely elegant (\args... -> code, and treated no differently to any other function), but C++ has a pile of other considerations that Haskell doesn't have to worry about, due to Haskell's uniform representation and garbage collection.

I think that functions and closures are fundamentally different types. Haskell does implicit type erasure whereas C++ does not. Most of the time in C++ when you use lambda you don't need uniformity and you are better off using templates to pass them around. For the rare circumstances you do need this, you just stick it in a std::function. Haskell also has mechanisms that could handle this: type classes/parametric polymorphism/existential types and I would find this a much cleaner approach, but it would wreak havoc with other design decisions in the language such as currying. My main point of this is that Haskell proponents often think of C++ programmers to be unenlightened or wilfully ignorant, when its actually very possible they have thought things out and simply reached different conclusions.

There was probably a political reason as well. Influencing developers and attracting them to your platform via a new language.

"C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. ... The issue, then, is that Go's success would contradict their world view."

No it won't! I'm excited about Go. What I've read about it makes it look like a language I would love to work in some day. However, I don't think the problems I'm solving now are the problems Go is good at solving. The programs I'm writing run on one machine with restricted memory, and they need to be highly, highly performant. The execution time is measured in tens of milliseconds, and during that time we do a lot of mathematics and processing. SIMD is our close, beloved friend. In that context, I don't think Go is an appropriate language, due to its garbage collector, its lack of precise control over where objects go in memory, and the overhead of method calls. But if I ever write code similar to that written by Google, C++ would be a terrible choice.

I was all on board with what he was saying until that last little jab. I'm not migrating to Go because I have problems that Go doesn't set out to solve, but I'm not threatened by Go, because it's just another tool. Having more high-quality, carefully-designed tools is never a bad thing, and I can't imagine ever being threatened by a new tool's success.

Honestly, this smacks of more divisive bullshit. Pythonistas vs Rubyists, VIM vs Emacs, and now, Go vs C++. Blurgh.

So if your point is a technical one, what's the main issue for you: that you can't emit hand-tuned SIMD code (I have no idea if that's possible), or that you can't do realtime because of garbage collection? I acknowledge that the Go compiler has some catching up to do to become as fast as C++.

I can't speak for him, but the issues that make Go not interesting at all to me (and I believe, to a lot of other C++ coders) are automatic GC, lack of generics and lack of template metaprogramming.

The thing with Go is that it is (was?) marketed as a systems programming language. Go might work as a replacement for Java, but Java is not for systems programming.

Whoever decided to say Go was capable of it either has no idea what systems programming means or was being deceitful. Go can not replace C++; these languages don't solve the same problems.

I always say, all of these "replacements for C++" don't feel like C++, they feel like statically-typed Python. Which is great, if you want statically-typed Python. There's a lot of fantastic uses for statically-typed Python.

But they're not C++. As soon as you tie the whole language to a garbage-collector and leave out any real mechanism for metaprogramming or even simple generics, you've lost me.

Pure text-based lexical macros like templates are a weapon of last-resort for language design... but it's not an optional one. Dynamic-typed languages cheat by offering a simple "exec" statement/function. Static languages have a religious objection to offering a first-party preprocessor, because they have the hubris to believe they can solve every possible problem ever within their language.

Nobody is that good at programming language design. Not Pike, not Gosling, not Ritchie, not anybody. The difference is that Ritchie's team knew that, and didn't try and pretend that every single thing could be easily and performantly expressed in their language, and made the hackish ugly macro system to solve the edge cases... and it worked.

I think Rust stands a chance as a C/C++ replacement: manual low-level memory management, templates, a powerful macro system and type-safeness.

Rust is specifically aimed at "frustrated C++ developers": http://www.infoq.com/news/2012/08/Interview-Rust.

"templates, a powerful macro system"

I really like Rust, but I haven't been paying attention for a month or so; did something change? Templates? And the macro system was...oddly limited when I tried fooling with it.

Rust consciously avoids having C++-style templates, so I think stelonix is referring to plain old generics.

^^well expressed stelonix.

This topic/article should have been re-titled, "Rob Pike on Why Go continues to be incorrectly marketed to systems programmers".

Go is not a systems programming language, since it gives barely a passing nod to control. Thus, it does not fall in the same core domains of C++ or C.

Yeah. You should be able to write a garbage collector in a C++ replacement, not consume one.

> lack of template metaprogramming

What do you use template metaprogramming for, except to compensate C++'s flaws?

(I agree with you on automatic GC and lack of generics, but not on C++'s template model in general and especially not template metaprogramming. TMP is just the most horrible, hackish way of scripting your compiler, and I've rarely, if ever found a use for it.)

As best I can determine, what you've just said is exactly how the article characterises the criticism of Go from C++ programmers.

The article's argument, however, is that C++ programmers are wrong to presume that those things are required for systems programming.


Although we can wonder how systems programming would work with managed languages, the reality is that as of today (2013), systems programming means low-level memory management, an almost transparent translation from code to its lower-level counterpart (which, in 2013's systems programming jargon, means the target architecture's machine code), and most importantly having as much control as possible over what your code actually does. Having garbage collection as a language feature takes away control, and systems programmers won't be able to do their job with it.

It's fine to think of GC as "fast enough" in 2013, but that's when you're working on things where speed/space isn't scarce. Go might work out for Google, since their "systems" comprise tens of thousands of computers, where reliability and maintainability are more important than latency. The rest of us C++ coders probably won't be able to replace C++ with Go for the tasks where we need to do systems programming, simply because we work on things that require us to be control freaks.

Quite frankly, this post by Rob Pike saddened me, because it seemed like he was desperately trying to counter the Go criticism with a not very well thought out rant.

It can be a systems programming language on pretend/virtual systems where things are make-believe, but not on real, physical hardware.

Both of those are problems for me. The garbage collection is probably the larger one, though, since I'm sure one could add explicit, hand-tuned SIMD into any language (auto-vectorization, while useful in many contexts, just doesn't cut it here).

The main thing is that the garbage collector is a problem in time-constrained and memory-constrained environments. I can't afford to wait to clean up my large data set until the GC decides to, and I can't afford to wait the 100ms that the GC may take. Or, more accurately, I might be able to, but I just don't know, and that lack of certainty means that a GC language is not an option for me.

But that's not to say that Go suxxors! Different solutions to different problems. I'm just saying why I won't be using Go to work on the project I'm on now.

This. The nondeterminism induced by an uncontrollable garbage collector is unacceptable given hard realtime requirements.

Well, he admits he was never very at ease with C++, so I guess he can be forgiven for not understanding that very few C++ programmers are going to jump to a garbage-collected language. If they were going to, they would have moved long ago. Most of the time you need C++ nowadays, it's because a GC language simply won't do.

It's kind of silly that he drew up his language based on one Google need and thinks everyone using C++ should just jump aboard, though. He acts as if he studied why people were writing in C++ and took their use cases into account, instead of just his own. It's kind of like the stereotypical engineer who writes an app for himself that utterly fails in the consumer market because it was written by coders, for coders, with weird controls.

Even just reading any of the dozens of long discussions about adding GC to C++ might have been enlightening. It's not like this has not been discussed over and over and over over the last twenty years.

The point being that in any discussion about new C++ features the central point always tends to devolve to:

What is the cost to me if I don't use this feature in my programs?

If I don't use exceptions, I don't pay for them. If I don't use virtual member functions, I don't pay for them. If I don't use RTTI, I don't pay for it.

In other words: If you want/expect C++ programmers to switch to your language, don't make people pay for what they don't want to use.

Support for plugging in a GC would not violate that, but enabling/requiring a GC by default most certainly does.

People use C++ in all sorts of places where having true garbage collection would be better. For example, it's likely used to write the browser you are reading this message on, and possibly the entire OS.

Since far better languages than Go have existed for many years, and those have failed so far to attract many C++ programmers, I just think this is never going to happen until C++ programmers themselves die out.

Are you implying that web browsers and Operating Systems are places that would be better off with garbage collection?

Using GC in C++ is rather simple, just use a GC-powered allocator and you don't need to call `delete` anymore. But GC just doesn't matter so much. Many forget-to-delete or delete-too-early problems can be solved by memory analyzers. There are actually very few bugs that adding a GC can solve.

This is why I said "true" garbage collection. Conservative GC ain't real GC. GC isn't about eliminating bugs -- use valgrind for that -- it's about increasing programmer productivity, along with all the other features missing from C++ but present in modern strict strong-typed functional languages.

Conservative GC is real GC, it's just the tracing set happens to be the stack. And I don't see why you can't use non-conservative GC for C++ allocators. It only requires some work for read/write barriers.

GC is all about using `malloc/new` without thinking about `free/delete`. Modern C and C++ have a lot of mechanisms for that (passing by value, auto_ptr and shared_ptr, for example). In many cases a C++ programmer doesn't need to think about `free/delete` either, and in some cases the non-determinism of GC decreases productivity too.

One of the major (the only?) reason I still write C++ code is the amount of libraries available. I'm writing computer vision stuff and all the important libraries (OpenCV, PCL, nonlinear optimization stuff) are in C++...

Sure there are bindings, but they are often incomplete and not quite stable.

I will switch the day a language allows me to transparently call my existing C++ library, without having to spend time writing and debugging swig/boost::python/whatever wrappers.

Just because some people are using C++ in projects where another language would be better, it does not follow all projects using C++ would be in another language. I have some projects I would hate to do in C++, but I also have projects I would hate to do in another language.

To quote an old cliche, right tool for the job.

For those who don't know who Rob Pike is, he's pretty awesome, helped create UTF-8 among other things, and now works on Go.


He has given some really great Go talks which are available on youtube, about an hour each:

Origins of Go Concurrency Style - http://www.youtube.com/watch?v=3DtUzH3zoFo

Lexical Scanning in Go - http://www.youtube.com/watch?v=HxaD_trXwRE

Go Concurrency Patterns (One of my favorites) - http://www.youtube.com/watch?v=f6kdp27TYZs

He also may have invented Not Invented Here Syndrome. - http://en.wikipedia.org/wiki/Not_invented_here

(NIHS was certainly invented at Bell Labs' CSRC, I'm just not sure of the exact timeline.)

Actually I've looked at Go and it seems like a really shoddy step forward (as opposed to say D), but thanks for being so condescending about it.

It always amazes me when people try to somehow empirically prove that one programming language is better than another. Sure, it's an interesting question to consider as a thought-exercise, but is there really anything valuable to be gained from expanding on this any further than that?

I've always thought that, other than objective measurements of binary code performance, whether or not you think a programming language is good is almost entirely determined by its syntax and feature set. I feel no need to go around the internet all day proclaiming my programming language of choice to be the best and/or others to be inferior.

This reminds me of that link I saw on here a while ago titled "Why Go? Use Racket!". Personally, my answer would be: because I find 9000 brackets in a simple program to be poorly readable. However, I respect anyone who disagrees and thinks the opposite. If it allows them to make great software and have fun doing it: by all means go for it!

You apparently thought the write-up was somehow overly condescending to C++ programmers (I assume), while I personally thought it was the complete opposite.

The tone of the article implied that it was somehow expected that C++/Java developers would come to Go. Well, no; there are choices.

That tone is implied because that's what they thought when they designed the language. He specifically says that exact thing... They were trying to make development of the kind of projects C++ was used for easier and as such it was a reasonable assumption that they would draw users of C++ to Go. He also says it didn't really end up that way and explains why.

We are all waiting for Mozilla's Rust to mature. Rust seems more in line with C++'s "you only pay for the features you use" and more general-purpose.

I wouldn't use go for numerical simulations but it certainly shines for writing network servers.

There is one feature of go that prevents me for considering it when I want to write "low level" code: garbage collection.

It's probably very silly but in my mind that puts it next to Java, above C and C++ in the "grand scale of programming languages".

Oh, and also I don't like garbage collection, so there's that. I'm sure some day someone will explain to me what's the point of it because I still don't get it.

I think you should be given a choice whether to use GC. Now that's a killer feature.

Besides the practical issues of using GCs, I don't get them at a conceptual level.

Keeping track of resources is a big part of any software development. You have to keep track of open files, user sessions, network connections, memory mappings, all kinds of caches, many other things... and allocated memory. And for some reason this last kind of resource we don't want to keep track of "manually"; instead we expect the computer to do that work for us. Why should we handle dynamically allocated memory as a special case of resource management?

Don't get me wrong, I love having my computer do my work for me. I expect my compiler to be able to inline my code for me, optimize my arithmetic, unroll my loops and do all kinds of stuff for me. Stuff it's good at, stuff it can do at least as well as I can and probably much better. Memory management is not one of those things IMO.

I feel like memory management is like pointers in C: a mythical dragon that scares those who have never used them before. It's really not that hard, or that much of an issue in the real world. If you really can't keep track of your memory allocations, the solution is not to add a GC; you have a major architectural problem.

That's nearly impossible to do well. The second you decide you want to use a library that relies on GC, you need to turn on GC for your whole app.

There's no way to make that a local decision that I know of.

I don't know much about Rust, but they do promise "optional task-local GC, safe pointer types with region analysis".

So, while I'm far from an expert on Rust, that seems to me like they have separate heaps for separate tasks, and heap values are not shared. So, effectively, the tasks are roughly as isolated -- memorywise, at least -- as different OS level processes.

This means that you can have different tasks that have GC turned on or off at the task level, because a GC'ed object will never "leak" across a process boundary. However, if you decide you want to parse JSON with your fancy new GC-using JSON library, you either need to put that into a separate task, or you need to enable GC in the task that calls into that library.

That means that if you want GC-free code, you're going to have to be very careful about which libraries you can call into directly.

That's a good point, and I bet that once Rust's ecosystem picks up it'll be a major differentiator to be able to claim that a given library uses no GC whatsoever (while also avoiding unsafe code, naturally).

FWIW, Rust's stdlib tries to eliminate the use of GC whereever feasible. Cyclic data structures and persistent data structures are probably the only places where GC will end up being necessary.

If somebody thinks that a language with GC could fill the same niche as C++, then well, they must work at Google.

Ohh... huh...

C++ programmers don't go to Go because Go occupies the space otherwise taken by Java.

The only real, modern alternative to C++ is Rust, but it's not at 1.0 yet, meanwhile C++ is still getting new features, all the while preserving compatibility.

Two exemplary reasons why Go occupies a different space:

- Mandatory garbage collection. Sometimes you need more control over memory than being able to use just one standard garbage collection.
- No dynamic linking. Just think about what would happen if Qt, WebKit or LLVM were statically linked into all programs using them.

> Mandatory garbage collection. Sometimes you need more control over memory than being able to use just one standard garbage collection.

Not only that; sometimes you don't want runtime overhead, plain and simple. I don't know anyone in high-throughput app development who would be happy with Go. You want to generate a sequence of instructions that gets executed.

Every language has a runtime. C has libc and C++ inherits it.

Ok, you can use C/C++ freestanding for kernels, but that does not change the fact that C as a language has a specified runtime.

Ada, D

Vala, and Python compiled to generated C code (GObject), seem like interesting options.

Plus, all this generated code is standard, so you can have different languages linking to each other.

And pascal!

Sadly, C/C++ take the world.

What really gets me about Go is that its lame, manual error handling sucks, leaving it smack dab in a middle area between my current languages of C++ and Python. If you don't believe me, note that golang.org's home page example lacks error handling (huge fail).

In C++, I like the predictable destructor model of memory cleanup, and am willing to pay the manual error-checking price (I'm too scared to write exception-safe code in C++). But I like to put as much complex logic as possible in Python code, due to its nice exception handling and GC.

If Go had a more flexible exception model (which, by the way, does not require forward-jumping try/catch constructs), making it easier for people to write 'crash-only' error handling without boilerplate, I might consider it. But for me, it's currently typecast with a firmly middle-of-the-road feature set.

That optional error handling still boggles me, particularly since http://golang.org/doc/faq#exceptions makes it clear that they undervalued the other major family of problems affecting C programs: programmers not checking return codes, in often-exploitable ways.

I can understand not going the full Java/C++-level overhead (although Python is a great example of how that doesn't need to be so tedious in practice) but it still amazes me that simply importing something you don't use is a fatal compiler error but the classic "res, _ = something()" responsibility shirk you see all over the Go world doesn't even get a warning.

Maybe Go 2.0 could make the compiler simply do the equivalent of inserting an `if err != nil { panic("Unhandled error at FILE:LINE") }` block after any statement which can return an error and doesn't already have a check. That'd satisfy the low-overhead, predictable flow-control crowd while making the language much safer for large systems in practice.

(n.b.: lest that seem like too harsh a condemnation, I actually rather like Go as a highly appealing alternative to C: the fact that they got so many things right — gofmt alone deserves a medal — makes the minimal error handling feel like a surprising oversight.)

With a parser for Go in the standard library, it would be easy enough to write a source scanner that looked for unhandled error returns.

This is true but that's a non-trivial task even before you get to the question of running it on all third-party code and getting the upstream to patch it. This would solve the problem only in the same way that static analysis tools mean C code no longer has buffer overflows or type conversion errors.

It's interesting to see you arrive at 'crash-only' error handling from C++. This method of error handling, usually known as 'let-it-crash', is arguably Erlang's greatest strength. Unfortunately, the designers of Go hadn't quite managed to crib this part of Erlang.

Once again, Rob Pike speaks about generics and types in terms of C++ and Java, without acknowledging any other languages with static type systems, some of which are more expressive, less verbose, or both.

Surely Rob Pike is not ignorant of ML (1973), Ada (1980), Eiffel (1986), Oberon (1986), Haskell (1990), OCaml (1996), C# (2000), and F# (2005)?

See also the comments on http://research.swtch.com/generic

I just have to quote Pike's comments:

"Early in the rollout of Go I was told by someone that he could not imagine working in a language without generic types. As I have reported elsewhere, I found that an odd remark.

"To be fair he was probably saying in his own way that he really liked what the STL does for him in C++. For the purpose of argument, though, let's take his claim at face value.

"What it says is that he finds writing containers like lists of ints and maps of strings an unbearable burden. I find that an odd claim. I spend very little of my programming time struggling with those issues, even in languages without generic types.

"But more important, what it says is that types are the way to lift that burden. Types. Not polymorphic functions or language primitives or helpers of other kinds, but types."

1. STL != generic types.

2. Lists of ints and maps of strings are hardly an unbearable burden; everyone and their dogs have been doing those in C for decades. But "string" is a poor universal data type. "int" is even worse. On the other hand, if "lists of ints and maps of strings" is your benchmark, you'll likely never understand that.

3. If you can confuse polymorphic functions, language primitives, or "helpers of other kinds" with types, you're probably doing something wrong. (Polymorphic functions, disjoint from types? I think I get what he's saying, but I need to ponder it some more.)

Pike comes out of a tradition, and his work is in some way the apotheosis of that tradition. The issue is that people confuse the dogma they were raised in with truth, and say stupid things like " ... To be fair he was probably saying in his own way that he really liked what the STL does for him in C++."

or Clean (1987)

Most programmers aren't in the latest-language bubble. Hell, my main earning language right now is PL/SQL, which limped out of the shadow of Ada, minus all the interesting bits, clutching hideous licence fees.

The reason Ruby and Python programmers have embraced Go is because they tend to be in the latest-language bubble.

The actual properties of the language are probably nearly totally inconsequential, except as bright lights that attract a certain sort of mind to it.

They probably work in start-ups or are free-lancers on small projects. Or they work at Google.

Less is more, but my ideal 'less' probably isn't your ideal 'less'. That's why we need more... so we can all have less.

C++ covers many domains and programming styles really well. Just because it doesn't cover some as well as some other language, it doesn't mean C++ as a whole is a lost cause. This is true no matter how many domain experts and language advocates attack it. It's C++'s huge scope that opens it up to attacks from every corner in the first place.

Frankly I'm getting tired of reading articles from people who think they can proclaim "what C++ is about" any more than I can proclaim "what Go is about" or "what Python is about". Even Bjarne himself admits he doesn't know all the ways in which people use C++, and that's how he likes it.

I think a lot of programmers don't come to Go for the same reason that I don't see myself ever using the language - it's far enough from a C-like language in syntax and ideology that it's not easy to move from the C family to it, and it has some "features" that I do not consider features. Those features make it taste like an academic language, which isn't what I'm looking for in a programming language.

Rust seems very different in that regard, and I look forward to being able to use it in production when the time is right.

I'm a serious C++ programmer, and I'm also in the Rust boat, rather than the Go boat. I've tried Go. More than once. And each time, I feel really restricted. I feel like I have to solve problems a certain way, rather than the way that fits the problem. With C++, I have multiple approaches to solve a problem, and can choose the one that offers the best trade off of performance and maintainability.

For now, if I'm really wanting concurrency, and don't feel like C++ is doing what I need, I'll grab Java/Scala with Akka. They allow me to create a solution tailored to a problem, rather than tailoring a problem to a solution.

A language that originally aimed at systems programming but isn't fast compared with C++; that now aims at web programming but is still harder to use than scripting languages; that has static typing but no generic types; that forces boilerplate in several areas (exceptions, assertions) but offers no metaprogramming ability to help fix it; whose design decisions were dictated by personal experience; and that, despite Google's marketing help, is still not widely accepted.

Imho the main thing C++ offers is cheap abstractions. In C++ you can easily build abstractions that do not have any runtime overhead at all. You can write beautiful and general code and still not sacrifice a single cycle for it. Go doesn't offer that, not in the least. As such I think that Go and C++ simply have a very small intersection when it comes to use-cases.

"Less is More" works when you have fewer components but you can assemble them together in various combinations to do everything you could otherwise do with more components.

However, when you have fewer components which can't be combined to achieve certain things that you could do earlier, it is hard to see how you can then say "Less is More".

Coming from game development, RAII is a blessing. Not having control of memory deallocation will not sell. This is definitely not a case of "Less is More", it's a case of "Less is Less".

No RAII is one of the things that really bugs me about go. I can't just construct an object and have it be ready, I have to remember to call some other method. Rust's memory model is also better, since it's akin to shared pointers (which you know the lifetime of) rather than garbage collection.

What I meant was that the way RAII works in C++ is a blessing and it can't work the same way in Golang since it doesn't give us control of when the object is destroyed. So, we are actually in agreement here.

So it seems that C++ programmers are a bunch of silly self-centered people who cannot think as good engineers because somehow they have all been brainwashed (probably by Stroustrup). That is how I read it.

Ah, by the way, did anyone mention garbage collection?

"It seems almost paradoxical that other C++ programmers don't seem to care."

It probably doesn't scratch their itch.

And it doesn't help that where it does scratch their itch it leaves a massive bleeding wound the size of a plate.

I've tried Go for some small stuff and read through "Programming in Go". I want to like it, but it just doesn't work for me. It's at an awkward position half way between C++ and Python. Knowing both C++ and Python, I don't see a convincing reason to use Go.

IMO, part of the reason C++ programmers aren't interested in Go is that C++ is, arguably, the most difficult language to learn. If you're really good with C++ you'll figure out most other languages fairly easily, and probably already know a few languages before coming to C++. So I would expect most C++ programmers already have a "go to" scripting/interpreted, high level language that they know and like. And in that case Go just doesn't offer anything they don't already have.

Actually, I think this is the genius of Go. It has a lot of the expressiveness of Python, but the static typing of C++. This makes it useful in projects that don't require the performance/sharp edges of C++, but that do require the discipline of a static type system.

> It's at an awkward position half way between C++ and Python. Knowing both C++ and Python, I don't see a convincing reason to use Go.

I don't think learning C++ would help you at all when trying lisp-like, functional or logic programming languages. With its single OOP model it doesn't even help you with languages that have different or multiple object models.

You also have a static type system so dynamic types are another thing to learn about, as are lexical scope and closures.

After all the effort of learning C++ you've only really learned C++.

I wasn't really talking about the transferability of C++ knowledge to other languages. The amount of effort to learn C++ is greater than the amount of effort to learn most other languages. Whether that knowledge transfers to other languages is another matter altogether.

The amount of time to become a C++ expert is far greater than to become a Python expert or a Go expert. After spending a few years learning C++ spending a year learning Go doesn't seem like a big deal, so why not learn it?

I just don't agree with your statement "If you're really good with C++ you'll figure out most other languages fairly easily".

As I was trying to illustrate there are a wide range of language features available in other languages that C++ does not have anything remotely like.

It takes a long time to learn C++ not because the concepts are particularly hard, but because the implementation of the language is rather messy and the code you produce can easily do crazy things not apparent on the surface.

I code in both C++ and C# and switching between the two is so much easier. I don't even feel like I'm in a different language.

What's a multiple object model?

I'm not a C++ guy, but I will say the most interesting thing about Go to me is that it's a smallish, clean, statically-typed language that gives a great day-to-day development experience. It also comes with a lot of nice tools for formatting code, documentation, dependency management, etc.

Switching from Ruby or PHP is pretty easy because you get the same (or better) developer experience with excellent performance and nice built-in libs. For writing web services, Go is pretty much ready to roll out of the box.

I think writing Go is fun. I'm not sure that it matters why other programmers do or don't like Go. I think it's great.

Interesting how different the discussion went - minimalism as a design aesthetic back then vs. C++ programmer outrage this time, all because of a different title and abstract.

(218 points, 1 year ago, 123 comments)

TL;DR - Rob finally realized Go is a flop, and nobody wants to use it for serious programming work. So, he whines about it, and blames C++.

For me it's not about the language. It's about pragmatics.

With C++ I can write code that builds natively using standard tool chains and libraries on Linux, Mac, Windows, iOS, Android, and dozens of other less common platforms.

With C++ I can write something once and then ship it on all those things without worrying (much) about the availability of tooling. If I write it using portable programming techniques (mostly acquired through hard experience) I will not have to do very much porting, and the #ifdef mechanism provides an acceptable way to accomplish alternate code paths on different platforms without compromising cleanliness of the build or performance.

C++ has its uglies, but it's not too bad if one knows it well and avoids various also-learned-through-experience antipatterns.

Go is in fact better than C++, but it's not better enough to justify the hassle or the impact in my ability to ship across umpteen platforms.

This is why everyone still does systems programming in C/C++. Write (a bit painfully) once, ship everywhere.

Other languages like Go, Ruby, Python, Rust, server-side JavaScript, various functional dialects, D, whatever, always achieve success in two areas: specific ecosystems/platforms (e.g. ObjC on Apple, Haskell in the world of high-speed algorithmic trading), and server-side coding. With server-side coding who cares if you can ship, cause you don't. But all the world is not a server, and sometimes you want to hit multiple user bases. For that, only C will do.

I actually think a Go (or any other language) compiler that builds to cross-platform C code with a configure script and a Windows build script would be a worthy development. I know many people hate language-to-language compilers, but this would enable one to write in nifty new language X, build intermediate code, then build that intermediate code with the native tooling of every other damn thing.

The other alternative would be for Google -- if it really wants to push Go -- to put a lot of effort behind porting it to offer good high-quality native tooling for everything and everyone's toaster oven. But Google thinks all the world's a server or a browser (and it is for Google for the most part) so that doesn't seem to be a priority for them.

One way they could push it, would be to give first class support for Android development, but I doubt it will ever happen.

Rob Pike did not aim to replace "C++", but "C++ as he used it at Google". Looks like he succeeded at that. For those problems garbage collection is a good idea (did the C++ guys try Boehm?) and compilation time is critical. From the article: "We weren't trying to design a better C++, or even a better C. It was to be a better language overall for the kind of software we cared about."

I agree with most of what Pike says until the end, where he derides C++ programmers:

"C++ programmers don't come to Go because they have fought hard to gain exquisite control of their programming domain, and don't want to surrender any of it. To them, software isn't just about getting the job done, it's about doing it a certain way. The issue, then, is that Go's success would contradict their world view."

This is just wrong. They do not come, because Go simply does not fit the requirements. Go is more opinionated and targets a smaller niche than C++. Nothing wrong with that. If Go and C++ both fit your requirements, Go is probably the better choice.

The only language I know which explicitly targets C++ is D, but it is not there yet completely. Though, depending on your needs, it might be enough, and it is an improvement over C++.

> The only language I know which explicitly targets C++ is D, but it is not there yet completely. Though, depending on your needs, it might be enough, and it is an improvement over C++.

And Rust. Although it has its own issues as well.

When typing the comment, I actually had written a sentence about Rust and later deleted it. The concurrency model of Rust is restricted compared to C++ and D: "lightweight tasks with message passing, no shared memory". However, Rust has potential. Maybe. Interesting times. :)

Sometimes I feel like I'm the only guy that likes and works in C. Then I see the TIOBE index and the like and see C on top always. Are we, C programmers, not vocal or are those metrics skewed in favor of 'legacy' and existing code?

I like C quite a bit, but if I said, "I like C more than C++ or JavaScript or Haskell", I'd be locked away forever. In a padded room. With nothing but a BASIC interpreter.

i.e. nobody, but nobody says their favorite language is C anymore, they've all found languages that have trade-offs, and most people don't need the trade-offs C can provide. Parsing XML? Un-fun. Regular expressions? Un-fun. GUIs? Un-fun. Hierarchical subclasses of actors? Un-fun.

In a corporate environment (particularly that corporate environment), the option of enforcing "we don't have to use the whole buffalo" exists. That is, you can say "these parts of C++ are off-limits for this code base", and be reasonably sure it will stick. See also: exceptions.

You could do that. Or you could go write a new language and further scatter the masses.

I'm currently a C++ programmer that does not intend to change to Go because I want to have exquisite control.

However, this is not because of my views or something like that, but because I'm only a C++ programmer at the moment: I need that kind of control for my current work. If I want to publish a paper or argue about specifics of algorithms and data structures, e.g. cache efficiency, it is highly convenient to know as much as possible about what your code does in detail and to have the opportunity to force a particular behavior.

The next time I'll be asked to get a complex problem solved somewhat efficiently (I did write a lot of java code before starting to dive into C++ 3 years ago), I might look into go. But as long as the specifics of my solution are "the job", I can still focus on getting the job done but choose C++ over Go.

New C++ features are always nice, because I don't have to use them all and can cherry-pick whatever is more convenient to achieve what I want.
