Hacker News
C++ in Coders at Work (gigamonkeys.wordpress.com)
178 points by pmarin on Feb 15, 2011 | 185 comments

All this C++ hatred is weird.

If you need to do systems-level programming (e.g. write a database), and you want to go beyond C's code organization capabilities, and you can select a sensible subset of C++, it's a great language, and you get to control all aspects of memory and CPU.

We're writing our database(s) in C++, and we're pretty much loving every minute of it, and would not switch to C, Java, Erlang or whatever.

Of course we're a startup, so we got to select our own subset, write our own containers, etc:


One downside is it's hard to find good C/C++ programmers on the market. The vast majority of applicants cannot complete our first interview filter (removing a character from a C string and writing an instrusive stack in C++):


Looking at your code a bit, it seems you are not using templates, operator overloading, exceptions, the STL or std::string. Your test framework also looks custom-built (aside: IMO UnitTest++ would have fit your style). I think if you just changed each class to a struct and made the member functions accept a struct pointer, you would get a nice C representation (almost every class I looked at had an Init method). In effect you are using C++ to avoid a very small amount of boilerplate in the equivalent C program.

No wonder you find C++ pleasurable, you are using it as C with classes! However, if you were working on an existing code base with a mix of STL, Boost and a bunch of other libraries, you would start feeling the pain of using C++. In my experience, every library author uses a different subset of C++, and as an application writer you often have to grapple with glaringly different idioms in your program. I find Boost pleasurable to use; however, looking at the Boost source is enough to melt my mind.

Almost everywhere, C++ (the language and its libraries) tries to satisfy every corner case and sacrifices the usability of the common case, e.g. STL iterators (at least I have BOOST_FOREACH to soften them). Ah well, this is probably contributing very little to the discussion, but I feel better now :) So let me redirect to the [C++ FQA](http://yosefk.com/c++fqa/) and let you decide whether the points in the FQA make sense to you.

We use templates, exceptions, STL, and Boost and deploy to tens of thousands of devices in the field. We have the advantage of a uniform hardware and OS platform, so we don't have to worry about compiler issues.

We could reasonably replace C++ with a combination of C and a higher-level language. However, C++ is a much better language for our low-level code than C, and we have a lot of low-level code, so we have to take it seriously. If we took the route of using a high-level language to coordinate functionality implemented in C, we would end up using C++ to implement the "C" libraries, just for the benefits of RAII, exceptions, smart pointers, etc.

We could also succeed with a stable and efficient Common Lisp implementation. Other languages such as OCaml might work for us, but I'm not qualified to say.

We wouldn't bother changing at this point, though, unless we decided to create an entirely new product to compete with our current one. C++ is not a problem or a limitation for us. The shortcomings of C++ add a little bit of extra work, but they aren't a multiplier. Most other languages would aggravate our existing problems instead of just adding a constant factor of inconvenience.

And that's with templates, exceptions, STL, and Boost!

Our project uses ICE (http://www.zeroc.com) for some scalable computing, and a large part of the system is indeed written in C++ (with a bunch of other libraries). However, developing a large-scale application in C++ requires discipline even at a small level (Should I use auto_ptr or shared_ptr? Should I pass by reference or by pointer? Macro or template? GCC does what?).

I have found C++ to be too complex to fit within my head. Even though I have been programming almost exclusively in it for a while, I get the feeling that I am spending more time placating the compiler than thinking about code.

For cross-platform work that uses a lot of libraries I would not recommend C++. That is where my frustration stems from.

I guess "discipline" is something we're lucky to have a lot of. 99% of usage questions are answered by looking at existing code -- just stay consistent unless there's a really good reason to depart. There are a bunch of things that would get you lynched in our group, but we rarely have to lynch anybody. There's a lot of mutual respect and a lot of respect for the existing coding style, especially among programmers who have done C++ programming elsewhere and suffered through unsafe and unscalable practices.

I think most good C++ programmers spend years coming up to speed. I know I spent many hours coding for fun in college and read through TC++PL a couple of times (also for fun) before I started programming in C++ professionally. It was a wise investment, no doubt about it. It's an open question whether it would be a wise investment today, though.

No wonder you find C++ pleasurable, you are using it as C with classes!

I also find C++ pleasurable (90% of the time) and in my code I make heavy use of templates (including compile-time calculation/preparation of data, i.e. not only simple "templates as generics"), exceptions (well, not heavy use; I use them for handling exceptional circumstances, nothing more), the STL (especially the containers), std::string (C strings and custom string classes are evil), Boost (not so much, but it has some great libraries), Qt (GUI mainly, but also the support libraries) and Intel Threading Building Blocks. I do not use multiple inheritance, but I certainly use classes and single inheritance, pointers, smart pointers and memory pools.

But I still like C++.

I agree that it has its problems and it certainly isn't a good language for beginners [1], but it is a very flexible and powerful language that is useful for a range of tasks from low-level to high-level code (though I guess it excels at mid-level: higher level than C, but low-level enough for performance-sensitive code, e.g. games). I certainly wouldn't recommend it for a lot of tasks, but I wouldn't avoid it either.

I'm not a big fan of the FQA, but the FAQ Lite is very good: http://www.parashift.com/c++-faq-lite/index.html Also, Bjarne Stroustrup's FAQ is excellent: http://www2.research.att.com/~bs/bs_faq2.html

[1] Even experts got this quiz wrong: https://scapecode.com/2009/10/a-simple-c-quiz/

We do use templates in our containers:


Yes, we try to avoid operator overloading and exceptions, and we explicitly don't use the STL, Boost, or <any library>. All the code is ours, which is really nice if you're debugging, finding performance problems, etc. It was our experience on previous projects that pulling in outside libraries in a systems-level project like a database ends up wasting more time (spent understanding, debugging and optimizing outside code) than just writing what you need. This has proven to be a good call: we fix all bugs, however deep or nasty, very quickly.

Thanks for the UnitTest++ tip.

The STL is not a third-party library anymore, and most of it has been part of the standard for several years now. I would be curious to know what problems you had with the STL, and how long ago. In my experience, most people who refuse to use the STL do it for the wrong reasons. Even the EASTL (which makes a good case against the standard library implementations) implements the STL interface.

The STL is not a third-party library anymore - it's part of the language just as much as the C++ runtime is.

Folks might accuse you of NIH, but the quality of outside libraries is variable - suitable for a prototype, but for shipping code you need control of ALL of it.

C++ Standard Library and Boost are of very high quality. It's quite tough to beat them.

Plus they're open-source, so they're theoretically "yours" in the sense that you can compile and deploy your own. If you find a bug in Boost, you can fix it. We frequently deploy open-source libraries we've patched ourselves -- never Boost that I know of, but at least we know it wouldn't be a problem.

Similarly, you can extend them (or at least make your own code compatible).

Things like move semantics and rvalue references in C++1x will make them much tougher to beat. And for the most part they're not particularly hard to understand either. The standard library in particular, despite being a generalized library, is highly customizable and can be optimized for special cases (e.g. with custom character traits or allocators). Because it's a standard, good C++ developers should be able to use it. But if everyone reinvents the wheel, developers would have to learn a new container or string interface every time they change jobs.

On major platforms, yes. But on embedded etc. it's hard to use any library - there are unintended dependencies.

{ embedded programmer }

Sure. You definitely have a point, but in the context of the discussion here (i.e. Maro's code), I see no evidence that he is programming for an embedded system.

Also don't forget licensing issues. They become important when you make a living selling software.

Avoiding all other libraries seems an odd choice to me. Are you including std::string in "STL"? Almost all of the "STL" containers are part of the C++ standard now so not using them doesn't make a lot of sense.

I have very little respect for the FQA. It is just a laundry list of complaints - absurd whining about every item in the C++ FAQ. E.g.: http://yosefk.com/c++fqa/assign.html

If you are going to complain about a self assignment check in the assignment operator function, maybe you should stay away from any kind of programming at all.

And I don't understand this complaining about which subset of C++ needs to be used. I have written Scala code and it most definitely does not use every feature of Scala.

The scenarios in which any of the C++ features need to be used are fairly obvious.

1. STL - use it all the time; it is one of the core libraries of modern C++.

2. Templates/overloading - you will use these features when you use the STL. You typically won't create your own template classes.

3. Exceptions - one of the C++ features used less often. Dealing with them is no different from dealing with exceptions in other languages like Java.

> I find boost pleasurable to use; however looking at boost source is enough to melt my mind.

The idea is that you use the Boost source and don't look at it. Boost libraries are a very specialized kind of template metaprogramming code that only container writers have to deal with. I like being able to use a smart_ptr without having to worry about silly memory leaks - something that would not have been possible in plain vanilla C without templates or operator overloading.

There are several deficiencies in C++ that I experience on a day-to-day basis, none of which are discussed in any of these online C++ rants. I have never faced any software engineering issues because of the multitude of C++ features. That has never been the problem. If anything, all these features reduce the amount of boilerplate that I need to write to get my job done.

You may not like STL iterators, but they are a thousand times better than some of the gnarly C macros I have seen used to get STL-like containers in C. Once you have seen that kind of C code, you will be glad templates exist.

I think C++'s popularity comes from the fact that it is essentially a better C. Which parts of it are better than C happen to be a matter of taste and for what you need it. I don't think anyone loves all of C++. But I think there isn't a viable competitor in the "C, but better" space at the moment. Go is probably too high level to be a complete replacement. D is... a weird case. I don't think it stands much of a chance in the market because it's not sufficiently better than C++. Overcoming C++'s momentum is extremely difficult at this point.

EDIT: downvoters: if you think I'm trolling, please note that I've written more code in C++ than any other language. I would love a "better C" that's universally better than C++. If you disagree with my points, please elaborate.

> But I think there isn't a viable competitor in the "C, but better" space at the moment.

In the Microsoft ecosystem, C# is this. No, C# isn't OS-independent or hardware-independent or HTTP-server independent. It's not meant to be. C# isn't meant to be an abstractly beautiful language like Lisp or highly performant and portable like raw C. And it's not intended to be part of the LAMP free-software stack; it's intended for a business context that can spend a bit of money to make money. (The C# compiler is actually free, though the IDE isn't and of course the underlying Microsoft OS isn't.)

It's C and most of C++ without the foot-shootiness or Java verbosity. For what it is meant to be and for the problem domains that it serves, C# really does make for great productivity. My company and I have used it as our primary language for several years and I like it a lot. Most of the productivity comes from tight integration to external ecosystem elements that make for a useful application, notably MS SQL databases and ASP.NET web servers, all wrapped within a solid IDE. More productivity comes from getting to ignore things like memory allocation and header files which the language just handles for you. Perfect for simply getting things done in a business context when things like price and portability are not a concern. (When they ARE a concern, then by all means use the LAMP stack and C# is not for you.)

I think most of what can realistically move to high-level languages already has (minus legacy code). Java and VB were the first mainstream wave, C# the second. They're sometimes still lumped in with "systems" programming, but I don't think that's a useful description. As such, when I talk about C++, you can assume I'm excluding situations where a managed, VM'd language would work fine.

Calling C# or Java a "better C" is therefore not terribly useful. C is not good at the things C# is good at and vice versa. I use about 6 or 7 different languages on a regular basis, and often the only choice is between C and C++.

I'm not sure where you're going with the second paragraph. C and C++ were never popular for general web app development.

"Perfect for simply getting things done in a business context when things like price and portability are not a concern. (When they ARE a concern, then by all means use the LAMP stack and C# is not for you.)"

You've been brainwashed by Microsoft. No one using LAMP cares about price (that much) or portability ('L' is for Linux). Freedom from vendor lock-in (and in this case, a predatory, vengeful vendor) is the biggest practical advantage. Stability and the thing actually fucking working is another (I never enjoyed babysitting Windows servers; do you?). It's also easier to build robust applications with LAMP.

Enjoy paying your Microsoft tithe, peasant.

Curious; what makes ObjC not a viable competitor IYHO?

D is doomed by the Phobos/Tango schism, no-one would risk their business on it.

Curious; what makes ObjC not a viable competitor?

The Objective-C additions just aren't very low-level. They're basically an object runtime library with some syntactic sugar. This is fine for situations where you want something a bit more high level and dynamic[1], but gives you no advantage over C in something like device drivers or programming embedded devices, because you just aren't going to use those features. Minimalistic (optional!) vtable dynamic dispatch is great for these uses, though, and generic programming can be really nice (despite C++'s version in the shape of templates) even for low-level applications. RAII also is really nice.

D is doomed by the Phobos/Tango schism, no-one would risk their business on it.

If D were that much better, I think a sufficient open-source ecosystem would have sprung up around at least one of the two libraries so that you could realistically use it. As it is, I doubt it would be popular even without the schism.

[1] Though in my opinion, most iOS apps would be easier to write in a managed language, especially if you could call out to C/C++. The Obj-C focus is understandable from Apple's point of view though.

"Though in my opinion, most iOS apps would be easier to write in a managed language"

Maybe, but just as you can not easily dismiss C++ because so many people have written so many successful programs in it, it is difficult to dismiss all of the great software from Apple, and arguably the largest library of software for mobile devices in the App Store, written in Objective C.

You point out the resource constraints that make C++ superior to Objective C and other languages for drivers and embedded systems. Similarly, I suspect Objective C gives Apple the right trade offs for writing sophisticated software for mobile devices that is responsive, uses memory effectively, and does not kill battery life.

FWIW, I agree that it's not a black & white issue, and I don't have an immediate suggestion for a replacement language that would be better than Objective-C in every aspect.

I thought that one of them was going away with D2?

> But I think there isn't a viable competitor in the "C, but better" space at the moment.

C99 is such a contender imho.

C99 is such a contender imho.

C99 is a pretty tiny (but welcome) incremental improvement on C. It's also almost entirely orthogonal to C++'s additions. Calling C99 a C++ replacement is pushing it, so you'll need to elaborate on why you think so. I'd be curious about an instance of something you can do better (let alone fundamentally better) in C99 than C++. I can think of plenty of examples of the opposite.

C++ sometimes makes me want to tear my hair out, sure[1], but when using pure C(99), I find myself thinking, "this code would be so much more compact in C++."

[1] This is usually because either I or someone else tried to be too clever. Otherwise it's because of something C++ does badly that neither C89 nor C99 do at all.

> Calling C99 a C++ replacement is pushing it

I didn't call it a C++ replacement. Just a C89 replacement. "A better C" :)

> I'd be curious about an instance of something you can do better (let alone fundamentally better) in C99 than C++.

It depends. If you need C++'s syntactic sugar for OOP-style code then yes - C++ is better for that. But if you don't do OO-heavy stuff you can use plain C and get away with less complexity (and more control over the instruction cache, etc).

I'm in the lucky position of not needing OO, so I get away with using C. And C99 makes my life easier compared to the old C89 standard (and the C subset of C++). I can declare variables in the middle of a function (or even within an if statement!). I can initialize structs by their members' names thanks to designated initializers. And the implicit "upcast" from void* to whatever is a blessing (though possible in C89, it isn't available in C++). Oh, and I almost forgot variable-length arrays.

> but when using pure C(99), I find myself thinking, "this code would be so much more compact in C++."

Well, this can go the other way round, too. Just use the right tool for the job.

Okay, all of these points are fair enough. I guess I didn't make it clear enough, but:

There's no language currently out there that covers as much ground as C++ and is anywhere near as good at it. C99 falls into a similar bucket as Go or Objective-C in covering a subset of cases where C++ is useful.

The point of the detractors, though, is that if only a subset of the language is good for any particular task, why not skip the subset operation and just choose a language better for the task?

I echo that sentiment. I'm using C++ in my current project (first time in many, many moons), and it does what we want reasonably well, but it is bigger than any language needs to be.

Because there's less cognitive overhead in learning 5 variations on one language to do 5 tasks than there is in learning 5 languages to do 5 tasks. You don't have to relearn what semicolons, parentheses, braces, colons and so on mean. You just have to learn how inheritance works. Or the STL. Or whatever.

> There's no language currently out there that covers as much ground as C++ and is anywhere near as good at it.

Have you tried OCaml? It has its problems, but it's nowhere near as bad as C++.

Perhaps as a counterexample: when I write linear algebra code that I expect to perform reasonably well, I greatly prefer using Eigen to other BLAS implementations (well, except the MKL, but that's a different matter entirely).

I'm curious as to why you wrote your own containers instead of using the C++ Standard Library, Boost, EASTL or any of the other existing container libraries. Care to elaborate? E.g., if performance or fine-grained control over memory is an issue, take a look at EASTL, which EA uses in its games (and which is designed to be suitable for both PCs and consoles). They are also extremely stable and well tested.

I've never tried to find C++ programmers on the market, but I do seem to come across fewer and fewer C++ programmers now. I use a lot of C++ myself (amongst other languages, including Clojure, Java, Python and recently SML). I think having learnt it long before becoming a professional programmer may have given me a kind of attachment to it, but I actually enjoy using it (most of the time; I do often wish it did things differently or supported features I see in languages like ML or Lisp derivatives).

As for your interview question, the first seems almost trivial. The second, an intrusive stack, isn't very hard either [1]. I'm surprised that most applicants have trouble with them!

[1] Here's one I slapped together in ~5 minutes (no comments, only tested by running that code, etc etc): https://gist.github.com/827619

"I'm curious as to why you wrote your own containers instead of using the C++ Standard Library, Boost, EASTL or any of the other existing container libraries. Care to elaborate? "

A game developer discusses why he often writes his own data structures and steps outside the STL, with concrete examples, at http://altdevblogaday.org/2011/02/15/data-structures-one-siz...

(fwiw I don't have a dog in this discussion. I don't use C++ and never will. The above is offered only because I had that page open in another tab when I read your question, and it seemed relevant)

Thanks for the link, very informative.

Personally, I will write my own data structures, containers or memory allocators when I have a task that doesn't fit nicely into existing libraries. There is nothing wrong with that. I am curious though when people only use their own. For example, I believe that when writing custom containers, providing STL compatible interfaces (perhaps alongside a non-compatible extension if needed) is a good idea because then you can use the STL when it makes sense, but your own when it doesn't (and pass data between them easily).

Having said that, I did not spend long looking at the code in question (and in the sibling comment a good reason was presented for at least some of the custom containers), so perhaps it isn't realistic or reasonable to mix and match in this case. Still, I like to understand people's reasoning because it helps me improve my own coding practices.

(Another author here)

The basic issue is that we didn't want to include external libraries, because they're a pain to debug, a pain to build on multiple platforms, and there are licensing issues.

Also we don't use that many data structures, so we rolled our own and we are happy with them. For example, we have an intrusive treemap that has a function for returning the middle of a sorted key range, which is useful when you want to split a keyspace in two. Not sure if anything in STL/Boost supports that.

Your solution looks good; you are hired! ;) Seriously, we know that it is not that hard, but applicants still have problems with it.

The C++ standard library shouldn't cause any problems on non-embedded platforms, including phones that support C++. Boost _may_ be a bit more tricky, though most of it is header-only, so it depends on the compiler more than the platform (but if you use g++, there shouldn't be any issues). As for licensing, the STL and Boost are both released under very unrestrictive licenses, similar to the C++ runtimes, so they won't cause you any licensing issues. Not too sure what license EASTL is released under. I know it's an unrestrictive license, but I don't know the exact terms, so it may or may not be suitable.

Sure, obviously for niche tasks a general library may not perform as well as a container custom-made for the task at hand. That certainly is a legitimate reason to roll your own (though, if it were me, I would make my containers STL-compatible, similarly to how Boost is STL-compatible; that way you can mix and match between standard containers and custom ones and easily convert between them, e.g. by using compatible iterators).

Regarding applicants, I am just as surprised as you are. I guess I didn't notice how much C++ is diminishing in industry (outside of its niche areas, anyway).

PS: Your database does look interesting. I may have to keep an eye on it.

select a sensible subset of C++,

The issue is picking the particular subset and then getting everyone to adhere to it. That works okay in mature engineering shops such as Google:


So I guess I'd say it requires more discipline than, say, Java. (One could similarly argue that writing good Perl requires more discipline than writing good Python.)

There's also John Carmack's quip:

IMO, good C++ code is better than good C code, but bad C++ can be much, much worse than bad C code.

-- http://twitter.com/ID_AA_Carmack/status/26560399301

Hm, the Google guide looks like yet another programmer's personal expression of style - the "pretty print police" at work. There may be some defensible (i.e. related to actual good programming practice) comments in there, but much of it is the usual personal-preference stuff about braces and capitalization.

Different folks internalize different features of C++, for good or bad. To take someone's comfort zone and institutionalize it is arrogant.

> much is the usual personal preference stuff about braces and capitalization.

There's nothing wrong with that; having that stuff in your style guide means that you get more consistent code, which is easier for everyone to read whatever their personal preferences.

(General point: Having a rule that says "Everyone must do X rather than Y" doesn't mean claiming, or thinking, that X is better than Y. There's not much to choose between driving on the left side of the road and driving on the right, but it sure is valuable to choose one or the other and stick with it.)

Braces are such a stupid thing to be worried about (thank you, BDFL!), but things like variable naming are very valuable in helping people parse code quickly. If you look at my code, you know instantly which variables are global constants, which are class instance variables, which are method parameters, and which are locals. Those are the valuable additions; the rest is an expression of an anal, alpha engineer or a group that said "let's just start with this thing I found online".

Actually, most of the guide relates to good programming practices, not just stylistic preferences. Also, I would argue that taking offense at style guidelines (which bring a lot of benefits, especially at a big software company like Google) is unprofessional.

One downside is it's hard to find good C/C++ programmers on the market.

It's interesting that C++ is increasingly becoming a niche skill. When I started programming you pretty much had a choice of C++ or VB if you wanted a job.

Suits me fine though. A lot of the most interesting work (DSP, kernels, graphics etc) is in C++.

For those that don't know what it means for a data structure to be "intrusive", here's a link:


The remove_char function was fun to write, and I found a bug in my first implementation! The intrusive stack problem, however, is confusing to read. I searched for a description of what an intrusive stack is on DuckDuckGo, and the first result was the Scalien job application. Further down on the page (through what appears to be several quasi-related links) I found a Boost article on intrusive containers which explained the idea much more clearly. After reading the Boost description, then re-reading the Scalien description another 3 or 4 times, I'm now comfortable writing an intrusive stack.

Are you sure applicants aren't failing because the application problem for the intrusive stack is written in a confusing manner?

"Are you sure applicants aren't failing because the application problem for the intrusive stack is written in a confusing manner?"

I don't know; it's very possible. I found the application problem very easy to understand, and writing an intrusive stack is really easy, so unless I'm very disconnected from the "average applicant", who apparently 1) isn't very good at C++ and 2) doesn't know how to implement common data structures, the problem must be with the application.

Edit: "I'm now comfortable to write an intrusive stack." I misspelled it as "not" by accident.

I tried editing the post, but edit functionality doesn't appear to be working correctly.

"...you can select a sensible subset of C++"

I love programming in a sensible subset of C++ too.

The problem is that C++ has a long history, and when it began, many programmers had erroneous ideas about OO - some skipped it completely, some built absurdly deep inheritance trees, some engaged in ridiculous tricks to stay within "doctrine" (I remember a fellow who had a pointer member to the same object in both a parent and a child class, just to avoid something about down-casting).

So from this history, there's a lot of really bad C++ code "out there".

And just as much: when you take two code bases or two groups of programmers, each happy with their different sensible subset of C++, and combine them without being careful, you also wind up with a hellish mess.

> and writing an instrusive stack in C++)

What is an instrusive stack?

It's explained in the link.


I guess I was thrown off by the typo and was looking for something else.

I'm currently implementing a complicated algorithm in C++. I don't have to use many language features for that, so I don't have a strong opinion about them.

But what I can say is that, compared to Java, the tools for C++ coding suck. Hard. In Eclipse you can write some gibberish and it automatically turns it into valid Java that mostly does what you want. The Java debugger is excellent and works without hiccups. The experience with Eclipse and CDT just isn't the same: no automatic inclusion of header files, autocompletion doesn't always do what you want, no documentation included, even for the STL. Working with gdb is painful; most of the time it is not possible to properly inspect data structures in memory, and it is generally just easier to litter the code with print statements.

Part of the tools issue arises because C++ is insanely hard to parse (the preprocessor makes it even harder); recovering programmer intention from something that is "almost" C++ is an exercise in futility. This is, I think, a genuine problem with the language. There are of course good arguments in the "Java wouldn't need such good tools if it were a better language" camp; the best of both worlds would be a language that's easy to write tools for but doesn't really need them.

With regard to debugging, I think you're being unrealistic. You're comparing debugging on a fully introspective and managed virtual machine to debugging annotated assembly language. Apples & oranges. If a fully managed VM meets your needs, then use it. If it doesn't, well, welcome to low level programming.

You also make it sound worse than it is; JVM debuggers aren't perfect either: they debug Java-the-language fine, but other JVM languages are more problematic. Good luck with JNI, too. Likewise, there are C++ debuggers which are STL-aware, and even those that aren't can usually be coaxed into displaying pointers as arrays or as supertype pointers. Using gdb directly is rarely necessary; there are perfectly decent frontends. If you're having problems because you're trying to debug compiler-optimised C++, then you're simply running into the limits of the platform. Debugging post-JIT Java is no fun either.

Can you name some "better than gdb" debuggers? I'm pretty frustrated with gdb, but all the alternatives I tried in the past sucked as hard as or harder than gdb.

On Unix systems, you usually still want to use gdb, just with a frontend. On Mac OS X, this will usually be Xcode; on KDE I like to use kdbg. There is ddd, too, which is ugly as hell but works well.

On Windows you probably can't beat Visual Studio.

I'm currently having to use pure gdb to debug some Mac OS X kernel code, and I feel your pain. Using a frontend is much nicer.

  In Eclipse you can write some gibberish and it automatically turns
  it into valid Java that mostly does what you want.
Isn't this the reason that many treat Java programmers as suspect? I think IDEs are fine once you have mastered the language and the syntax, but they often mask the gaps and deficiencies in one's own knowledge. I find that a bit scary.

I understand that mine would be a minority view. Guesswork at the IDE together with TDD has a significant fan following. Apparently it gets the job done; if that be so, who am I to complain.

Edit: It seems I have hurt someone's ego :|

No, I don't think this is a valid reason - in fact, I think it's a prejudice which disguises an ignorance of why languages with productive IDEs are so good.

An IDE with code completion means you can get more done without having to waste time studying API docs; in fact, the IDE's editor can itself become the most efficient doc lookup tool. This gives you more time and attention space to point at your problem domain, rather than mere incidental details.

Object orientation with static typing is a big contributor to this, because you start out with what you want to act on - the noun, which is usually a local variable or field - and get completion on the actions available on that noun. The contrast with functional and procedural styles is stark: there, you need to know the symbolic name of what you want to do to something you already have in hand, but editor completion is almost powerless to help you, because of the order of tokens in the syntax.

Statically typed "pure" functional languages have somewhat different tradeoffs when it comes to looking up functions; the style of code completion common in OO languages doesn't work, but because the type of a function encodes so much more meaning about what the function does, you are often able to find the function you are looking for just by giving its type to a tool like Haskell's Hoogle. This is less automatic, but it enables you to search a large array of libraries simultaneously, which is very helpful when you don't know what library contains the function in question!

Yes. And there's no reason you couldn't integrate Hoogle in your editor.

I think C# really exhibits this. Almost everyone starts (and stays with) Visual Studio, which has extensive auto-completion, meaning if you ever use a plain text editor, it's startling how little of the library you have in your fingertips.

I don't know about the connection with Java coders being suspect - I would have suspicions of anyone who's never stepped off a VM into an actual machine.

I don't care for IDEs either: the farthest I get is my emacs textual expansion.

Eclipse and CDT for C++ coding sucks. However this doesn't mean there aren't any good tools. Both Visual Studio and XCode are very good for working with C++.

Do you have any recommendations if you're working on a Linux platform?

This is purely out of curiosity. I'll stick to my standbys of Vim and a terminal window.

The best IDE I was able to find for C++ on Linux is Qt Creator. Its only drawback is that it's quite Qt-integrated, but with very few tweaks you can make regular C++ projects with no Qt dependencies.

Everything from the debugger to the code completion works well, which is more than you can say about most C++ IDEs on Linux.

The current Qt Creator (or I think its beta version) really has the best C++ syntax support (which has a direct impact on things like autocompletion) I have seen so far.

The second best I have seen was in KDevelop (when configured correctly -- I remember it only worked correctly when you pre-parsed the STL library).

I am also very excited about the Clang development. It was designed in a way that should make it very easy to integrate it (or parts of it) into an IDE. Xcode 4 will do that, and others will probably follow at some point.

I recently came back to C++ after many years. I spent days trying to set up a good environment with vim, then I tried jEdit (http://humbertook.blogspot.com/2010/11/personal-preferences-...), and finally I started using Qt Creator. It is not as good as Eclipse, but it is a big improvement over vim, especially if you are already using Qt.

Try KDevelop 4. Its semantic highlighting is really awesome. And most importantly it doesn't get in your way. Unlike Eclipse or even Qt Creator which I find frustrating to work with.

On Windows I recommend Code::Blocks (http://www.codeblocks.org).

Have you tried NetBeans?

Whilst I couldn't find a way to automatically include header files, once they're added NetBeans automagically switches on autocomplete and finds documentation where available. I've used it for my own C++ projects (using headers and libraries from others as well as my own) and had no problems. I'm by no means an expert (I've used C++ for only a few months, coming from a PHP background) but I managed to get a few CGI applications working using Xerces-C.

(I'm gonna get reamed for this, but...) Visual Studio is great and with the proper setup can do all of that and more. Obviously it's Windows only, so that may be a problem.

Frankly, I think much of people's hatred of C++ stems from the relative lack of good tools on non-Windows platforms.

> the tools for C++ coding suck. Hard.

?? What's wrong with vim? (or emacs, or your flavour of choice)

The verbosity of the language?

It's interesting to me that so many people dislike C++. I prefer it to Java (which I find extremely verbose), don't think its features are really that complex, and like its speed. I think it's a powerful, object-oriented, statically typed language.

It seems when compared to other statically typed languages (it doesn't really make sense to compare with dynamically typed languages) C++ does a good job. The STL prevents you from having to reimplement many things you might otherwise need in C and the syntax is more natural or intuitive when compared with C as well.

I have a feeling people just prefer what they're familiar with and make sweeping generalizations or exaggerated claims about other languages and technologies.

A lot of C++ bias lingers from the days when compiler support for more sophisticated features like templates was very immature and buggy. C++ is much more pleasant to code in now than it was when I first used it in 1997. I certainly wouldn't want to write a webapp in C++, but for anything that requires speed and direct control of memory C++ is still really the only game in town and I quite enjoy it when it's the right tool for the job.

People describe Objective-C as "C with objects" done the right way, but I'd much rather code in C++ than Objective-C. I also don't really understand why people say they prefer to write in pure C, at least not in 2011. Hacking together a homegrown OO system in C with structs and function pointers and macros doesn't sound any easier or simpler to me.

That makes a lot of sense. I hadn't thought of its past when the compiler wasn't as solid as it is now. I suppose I'm spoiled by the fact that a lot of that had already been completed by the time I started coding.

> wouldn't want to write a webapp in C++

Actually, POCO has all the stuff you need to write a web app: templating, form handling, HTTP request/response objects, etc. Not that I've done it, but it looks like you could write a web app using POCO::Net rather painlessly.

I like C++ and don't understand the frustration others express. I don't use a subset, and neither do the other people I work with. We use pretty much the whole thing, TMP, macros and all.

Being well familiar with C++ I can never see a clear reason to prefer something like Java. Qt is better for GUI stuff. POCO is just as good, or better, for network stuff. With C++ you can go ahead and marry the host platform's system calls when that makes sense (i.e. write a unix program).

When it comes to scientific or graphics type libraries it seems like the premier solution usually has a first class C++ interface. The Java equivalents seem harder to evaluate. I did a bunch of GIS stuff recently, using proj.4 and various other things. It sure appears to me that the Java/C# equivalents of the GIS type libraries are less supported and less popular ports of the C++/C ones.

If you have perl/python and C++ in your tool belt, I just don't see the need to reach for the so-called C++ replacements. Aside from hiring considerations, that is.

Being well familiar with C++ I can never see a clear reason to prefer something like Java.

I also much prefer C++ as a language but garbage collection is such a huge productivity win that I'll always use a GC'd language if it's appropriate. No matter how cleverly you abstract manual memory management away in C++ it's still a real chore and a source of bugs.

In what way is Java more verbose than C++? People in Java tend to choose longer names, but that is more a matter of programming style than something in the language.

In C++ you have to declare your functions in a header and then write them again in the .cpp. I dislike that because if I change one I have to change the other.

Man I am tired of hearing what Zawinski thought about C++ fifteen freaking years ago. If you're good at C++, use it. If not, don't. If you want to get good at it, practice. How is that different from any language?

One thing to remember, especially when considering the discussion of C++ at Netscape: it took a while for most compilers to implement a consistent set of C++ features. Netscape was being built across a dozen or so platforms, and this was a major concern. If you have built Mozilla recently, the process is not a couple-of-clicks affair; it is somewhat involved. It was worse way back when.

I've used C++ at the last four companies I have been involved with. The only consistency has been that no place used the same subset of C++ features. It usually boils down to what subset the team members are comfortable with. Variations have included exceptions or not, whether or not to use multiple inheritance, STL or Boost (yes, Boost isn't really part of the C++ standard, but things like shared pointers are in TR1), whether or not to use TR1, etc.

I personally don't have a strong opinion, but there are people that abuse C++ in ways that produce lousy code -- classes as a way to have nearly global variables, the whole C with classes approach, etc.

You really should use the right language for what you are working on and that plays well with any libraries you may want to use.

For curious folks, here is Mozilla's C++ Portability Guide (last updated July 2010, so it is still pretty recent). It is not a style guide. It contains some good lessons about writing portable code for "25 different machines and at least a dozen different C++ compilers."


The real shame about C++ is that it's the only mass market C-level language with advanced features built on top of it. Don't get me wrong: I agree with the author and his interviewees. I think C++ is a horrible language for the very reasons described: everyone uses a different subset of it and it's hideously complicated.

For a more current view, see Linus Torvalds' comments [1] [2].

What I would like to see is a language very much like C (being low-level) with a few features built on top. What those features are I guess is the real trick.

One thing I like about C++ is the constructor/destructor mechanism. It's very explicit. The trend now is towards garbage collection (which, admittedly, you can implement in C++). This is a trend I largely agree with: it's quicker and cheaper to write software that way (IMHO). But there is still a place for simpler schemes, as all GCs I've come across have issues.

One thing I think such a language shouldn't be is object-oriented. To quote Joel Spolsky [3]:

> A lot of us thought in the 1990s that the big battle would be between procedural and object oriented programming, and we thought that object oriented programming would provide a big boost in programmer productivity. I thought that, too. Some people still think that. It turns out we were wrong. Object oriented programming is handy dandy, but it's not really the productivity booster that was promised.

Or at least such objects should be very limited in scope. Linus addresses some of the reasons for this, such as knowing which function is being called by simply looking at a code snippet (typically a patch in his case).

When writing low-level code I certainly do much prefer C over C++. But having the tools to build automatic reference counting into C would be really nice at times.

[1]: http://thread.gmane.org/gmane.comp.version-control.git/57643...

[2]: http://www.realworldtech.com/forums/index.cfm?action=detail&...

[3]: http://www.joelonsoftware.com/articles/APIWar.html

It's funny, because back in the '80s we had a language that was already where C++ and Java were going. It's called Ada. Back in '83 it had built-in concurrency, generics, etc.

The problem was, back in 83 the compiler was the size and complexity of a C++ or java compiler from 1995. Couple that with some horrible implementations and Ada never took off much beyond Avionics/security/life critical applications.

Which is a shame, because comparing Ada to C++ or Java today, IMNSHO Ada wins hands down. The compilers are faster, the code very nearly as fast (in Ada you get things like runtime array checking, which slows you down). Unfortunately, programming language wars are less about technical merit (Java would never have been popular otherwise) and more about social popularity contests and network effects.

EDIT: I might add that at this point Ada is /far/ simpler than both C++ and Java. Jean Ichbiah thought carefully about what was needed for large, long-lived projects (millions of lines, 30+ years of life) and it shows. They got pointers right the first time -- access types. Verifiability -- testing your code is all the rage now, 27 years late. Best yet, the ecosystem around the language is mature. That is, unlike C++ you don't have a dozen different approaches taken (template hell), and unlike Java you don't have bureaucratic overload.

People keep bringing up Linus' C++ in git rant, and I don't understand why it is used as an example, as it contains no real substantive reasons as to why to dislike it.

There is no real argument made one way or another. If we look at the linked article there are real arguments as to why people don't like C++: templates, multiple inheritance, and a whole range of other stuff. All Linus is yelling is "I hate C++, I hate STL, them Boost guys aren't worth anything" with absolutely no reasoning whatsoever.

As a project leader it is okay for him to disagree about using C++ within his source code base, and within context it is a perfectly valid email, but to use as an example as to why C++ is bad, it is a very bad email to quote.

From my impressions of how he conducts himself, Linus is usually yelling with no reasoning whatsoever.

He explained it more than once: nuances are lost on the internet, on one hand, and (apparently) Finnish culture is very in-your-face, with everything made more extreme for dramatic effect.

You may not like it, but it is extremely effective, and Linus _is_ herding thousands of developers, he has to be effective.

Nuances are lost on the internet? Because I've seen videos of him giving talks, and he acts pretty much the same way. What nuances are being lost?

Why do we excuse bad behavior from people?

Benevolent dictator

I happen to think that's the ideal government. It requires a dictator with the right qualities, and they are rare. Thus, what is possibly the second-best government type: democracy.

Look at futarchy for an alternative model. (Though you could classify it as democracy.)

The problem, of course, is that what is considered "benevolent" is entirely up to the dictator to decide. That alone makes it a non-ideal government type.

No, it is up to the coherent extrapolated volition[1] of the dictator's people. (At least, it seems like a good guess.) What makes a government less than ideal is its failure to apply that volition. The question then is how hard a given type of governance is likely to fail.

Now I agree that for a host of reasons (like taking one's own volition for the country's CEV), dictatorships do not rank high.

[1]: http://singinst.org/upload/CEV.html

The point of a dictatorship is that the people don't have a say. The dictator does what he wants and portrays himself as benevolent through state-run media.

Absolutely, and it works well for him. In this case the yelling is being held up as an example of why not to use C++ which does NOT work as a valid argument.

I strongly reject the idea that object orientation hasn't provided a big boost to programmer productivity. In my view, it's enabled building systems that are orders of magnitude larger and more complex because of the discipline it enforces in modularity and the discoverability it lends to APIs.

I mean, do you remember the C written in the '80s? Often, the only unit of modularity in common use was the translation unit, with static globals used freely and structures with no visibility protection having their inner guts manipulated by different modules, frequently by manually tweaking pointers. If you had an API that protected structural guts, you were almost always essentially programming in an object-oriented style, only with no help from the language and compiler.

I think a lot of the benefits of OO reuse have been lost due to platform shifts. If you build a system on NeXTSTEP in the early 90s, then throw it out and switch to Java, then throw it out and switch to C#, you're naturally going to lose a lot of the efficiencies you theoretically could have taken advantage of had you somehow managed to stay on NeXTSTEP or OpenStep all along.

And then, of course, you have mergers and acquisitions leading to code being thrown away or warehoused or merged with other systems, or new management deciding to start from scratch.

Basically, the industry doesn't really have a long enough attention span to really make the most of OO code reuse.

People have long talked about how OO failed to generate code reuse, but I don't understand how they do it with a straight face. You reuse masses of OO code every time you write a non-trivial program using a modern standard library.

I think the people making this criticism are looking in the wrong place for the reuse, and / or had weird ideas of what could be reused, and how it could be reused, to begin with.

The reuse that OO gives us comes from raising the level of abstraction with which the program is written, in particular because runtime libraries are so much larger ("batteries included", etc.). The hierarchical namespaces and encapsulated structure of modern class libraries reduces their cognitive overhead. For example, in .NET, you can work with WebRequest, TcpClient or Socket, depending on what level of control you need; and the conventions for working with these things are pretty uniform across the board. Larger applications are self-similarly written at a higher level, reusing code within themselves in a framework-like way.

But the people who thought there was going to be some kind of central library of domain-specific classes in your company, that you would reuse in multiple disparate applications, I think that was always fairly naive; reuse implies coupling, and coupling of things that are individually subject to change in ways that affects their users is fraught with problems, and always has been.

The best candidates for reuse aren't usually representations of the domain concepts that are central to a business, because each application in the business will probably be wanting to do something quite unique with those domain classes. Rather, it's concepts that are self-contained, universal, not likely to change much over time nor need different intrinsic behaviour from application to application, which are best suited to reuse.

"You reuse masses of OO code every time you write a non-trivial program using a modern standard library."

How much of that is due to OO, though? I reuse masses of code without OO all the time.

The biggest problem with OO is the abuses. I recently worked with a large codebase where /none/ of the behaviour for objects was in the objects themselves. Some Architect had come along and beaten it out of them with an extreme application of the 'too many patterns' anti-pattern.

One way they justify this is to 'reduce coupling', and so your comment about coupling tripped a red flag for me. Whenever I want to take the piss out of an Architect I just tell them that we should 'add an extra layer of indirection' to the design. 99% of the time they agree without realising that I'm satirising them.

"where /none/ of the behaviour for objects was in the objects themselves"

Actually, this could be a good thing, depending on what the goals were. For example, non-member non-friend functions are often more OO than putting everything into the class[1]. Furthermore, if you want to use function overloading for multiple dispatch, keeping the functions separate from the objects is also useful. If your system is highly concurrent, keeping code and data separate can be quite helpful. Also, keeping data in structure-of-arrays form and using external functions to operate on those arrays can make huge differences in terms of cache usage, potential parallelism and vectorization of instructions.

Coupling might be a reason to do this, but it's certainly not the first reason I think of. There are plenty of better reasons (and as always, not all reasons apply to all codebases).

TL;DR: There are lots of reasons why doing this could be a good thing.

(FWIW, I like to do this with my core data structures because I believe code and data should be kept separate (and that data is the more important of the two). I do like to provide normal objects as an API though, because I often feel it's a natural interface, but the internals of my code are rarely very OO in the C++/Java sense.)

[1] http://www.gotw.ca/gotw/084.htm

"But the people who thought there was going to be some kind of central library of domain-specific classes in your company, that you would reuse in multiple disparate applications, I think that was always fairly naive"

Well, perhaps, but I've seen it done successfully in NeXTSTEP/OpenStep shops back in the day, such as an investment bank. Not one big library, but a few frameworks so there's some separation.

The team I was on had our own frameworks, shared among the apps we developed. We also used/built our frameworks on frameworks from the bank's Architecture group, which were also used by other development groups.

I agree. I'm not a big fan of OO, preferring functional approaches in most cases, but I usually use some kind of OOP either to tie things together or to provide easy to use interfaces to the system, as I find constructing and passing objects to modules is often a natural way of interfacing with them, especially if the system is complex and the objects are used to "insert" code into the system (eg, algorithmic skeleton style). So even though I dislike OO as a whole, I think it has a very useful and important role in programming and it certainly helps us bring ever-more complex problems to a level we can reasonably deal with.

Having said that, there are some things OO is bad at, eg concurrency.

How does OO have any effect on concurrency? I'm genuinely curious here.

Mutable state.

OO prioritizes encapsulation ahead of immutability as a way of making the problem of a mutable struct tractable. Most OO programs are graphs (picture as a complex web) of mutable objects holding references to one another, synchronously passing control from one object's method to the next (picture as a spider travelling around the web, making modifications). Concurrency means there is more than one logical point of control flowing around the graph, making modifications. To ensure modifications are consistent, one must now fight against the interconnected nature of this graph, and make certain areas mutually exclusive; that means you need to set up gates on all the edges leading into those areas (often called semaphores or mutexes).

But OO gives you precious little help in setting aside these areas, and guaranteeing internal consistency in the face of multiple mutating points of control.

Programs in a functional style are different, with their heavy emphasis on immutability. In this case, the web, as it were, is fixed and cannot be changed; instead, if there is to be a modification, a new copy is created (possibly only of the local area of change, and the unchanged portion included via references). Since there is no way to change data, there can be as many points of control performing operations as desired. (Actually, one of the models of computation used for functional programs is that of graph reduction, which pictures the program as a web of expressions rather than structures; and that big expression is iteratively made smaller by calculating different parts of it. In principle, the more points of control you have doing these calculations, or reductions (in the same way as '1 + 2' can be reduced to '3'), the better.)

Programs written in an agent / actor style are also different. Here, there is no single point of control flowing from one node in the graph (web) to the next; instead, each node has its own little point of control, and it reacts to messages coming in from each edge, and sends out messages along edges in response. This makes the program locally single-threaded, but globally concurrent.

You said it better than I could have! :)

Code organization aside, how is OO any different from procedural programming with regards to concurrency?

I would say that procedural code is less inclined towards both immutability and encapsulation than OO; I would say to a first order approximation that it tends to be worse than OO for concurrency.

But there are moderating factors. Procedural programs that don't enforce strict modularity conventions tend to be bound in size by their complexity, a complexity that works against the level of understanding necessary to make things both concurrent and correct. Meanwhile, procedural programs that do enforce strict modularity will probably be using conventions that emulate a different paradigm, and will inherit that paradigm's costs and benefits WRT concurrency.

In 15 years of experience where Java sadly pays the bills, for example, systems written in that language that had a modeling approach that tended toward immutability and tended to limit the visibility of mutable state throughout the system were also more understandable, more amenable to change and therefore better than those that weren't. Irrespective of concurrency and parallelism!

Yes. Arguing that immutability is possible in Java is a bit like arguing for good code in PHP: it's possible, and advisable.

But Java naturally tends towards mutability. And you will have to work harder to make it immutable. Java also does not give you as many tools for this kind of style as functional languages usually give you.

First class functions, a rich type system and a library full of immutable data types come to my mind.

Also, OOP (as seen in C++ and Java; the original idea of passing messages to objects sounds like it would have suited concurrency a lot better) encourages mutable state. Sure, you can program with immutable state (and many people do -- I certainly do), but it feels like it's going somewhat against the grain of OOP in those languages.

Worse, OOP tends to hide the mutable state away internally in objects (or in objects stored inside objects etc) - OOP's data hiding and abstraction support can go against you here. Sure, good programming practices and discipline (eg, const correctness in C++) helps, but the languages definitely encourage mutable state.

I wonder how using methods-as-messages and turning OOP into a message passing system not dissimilar to the actor model would work in practice (both from a usability/syntax and concurrency view).

I'd say the biggest boost came from modules, and the control of side effects (by, among other things, limiting the use of global variables).

Now, do you see a significant additional boost that were provided by the various OO styles out there? More specifically, what significant advantage do classes have over modules?

I'm designing such a language. It's called C³. I would like some feedback on my design "rules of thumb"[1]. I'm literally still at the whiteboard! [2]

[1]: http://blog.c3lang.org/aims-and-rules-of-thumb-for-the-desig...

[2]: http://c3wife.com/?p=523

Just a tip - the superscript makes googling for or talking about your language a hassle.

I like your comment overall, though I don't find Linus' C++ rants all that useful. Also, no, you can't implement a GC in C++. (Which in turn makes its operator overloading and exception handling defective and throws reflection out the window.)


"C++ does not feel pain. It can't be reasoned with. Starting a fight is a big mistake. If you want to use C++, you must learn to love it the way it is, in particular, manage your memory manually."

The Boehm-Demers-Weiser collector is available for C++.

I guess the grandparent commenter meant a moving GC?

I am keeping my eye on Clay http://tachyon.in/clay/

Similarly, I've been hoping that Rust (which is somewhat run by Mozilla labs folks) would turn out well:


Perhaps Google's Go will end up being the answer.

I think many people were hoping for it but Go fell fatally short in 2 key areas IMO:

1 - Error handling - Go doesn't have any powerful mechanism for error handling. The code I have seen is rather inconsistent, and it doesn't solve C's main problem of not forcing people to handle errors.

2 - Polymorphism - They left any kind of generics out. I think generics could go a long way toward (1) too; just look at the Either or Maybe types in Haskell. Combine this with some pattern matching and you have a really powerful way to enforce error handling and have polymorphic data structures and functions. IMO this is the biggest failing of Go.

I think the failure to provide those two things also makes Go a future mess. Adding them later either breaks a lot of outstanding code or means you have two versions of Go, the original and the new, which is a mess.

Adding them later either breaks a lot of outstanding code or it means you have two versions of Go, the original and new, which is a mess.

Actually, the creators of Go are still not afraid of breaking things. For example, recent release (2011-02-01.1¹) changed the syntax of non-blocking channel operations, and it broke code.

[1] https://groups.google.com/d/topic/golang-nuts/uHfjRyO1Q6c/di...

That is good to see. But I would argue that the change described here is a rather small change while the 2 missing features I described have a larger effect on how you design your application.

Well, adding generics shouldn't break working code; it's an additive change.

It says something that for the built-in collection types, Slice and Map, they had to hack around their own API and implement this-case-only ad hoc generics. Slice and Map should be fully implementable within the language.

Why should Slice be fully implementable within the language? C++ is another systems language that decided that everything should be able to be done in libraries, and we've all seen the complexity that results.

Well, I'll back off slightly from the assertion "fully implementable" because you have the f = s[i] syntax which wouldn't be fully implementable unless you allow the user to define [], which I agree is crazytown.

However, the make() functions for slice and map are the only functions in the language that allow you to pass in a type and get that type back as a return value. Why can't users have that functionality? Generics seem to be the easiest way to get it.

I think I agree, but I'm not good enough at language design to be sure. :) In the current language, all uses of type literals need to be known at compile time; if I could pass a variable containing a type to "make" or use it to instantiate a map, the language needs generics to deal with that, and all of the complications that come with it. The current idiom seems to be (if I wanted my own "make"):

    myslice := mymake((MyType)(nil), 10).([]MyType)

which is much uglier than the built-in way. However, I'm kinda okay with reflection-laden runtime tomfoolery looking a bit messy, mostly because I rarely use it, and when I do I like it to stand out and be obvious what is going on.

I don't agree with #1. I prefer Go error handling to pretty much any other style I've seen.

To date, I have seen no language except Qi that (1) supports sum types, and (2) isn't a statically typed functional language.

Anyway, it's not like a wannabe OO language would adopt Algebraic Data Types. "Those are for those ivory tower languages that no one uses in the real world", one might say.

Qi: http://en.wikipedia.org/wiki/Qi_%28programming_language%29

Or try D.

No. And thank science for that.

You are talking about Objective-C.

The thing that OO does, and does really well, is that it makes handling special cases really easy. Anybody who has come across 'business logic' (a beast related to 'military intelligence') knows that special cases are going to be the bane of your existence. Ta da! Problem solved.

Why then, you might complain, have we not seen the benefits of OO? Because the majority of programmers (usually by their own admission) suck at design.

About constructors/destructors: in C++ you can allocate an object on the stack, and its destructor will be called when the variable goes out of scope. That feature, dubbed RAII in C++ lingo, may be the simple GC you're looking for: no explicit delete, good locality of destruction. It also allows correct handling of exceptions.

Ironically, it's copy constructors which show up to break exception safety horribly if you're trying to be ultra-safe - cf. no stack<T>::pop() that returns the value popped. What C++ adds with one hand, it takes away with two more hands that popped up out of nowhere like mutant growths.

C++0x solves that issue by introducing rvalue references (move semantics).

There is some decent code in Doom Classic, but it is all C, and I would prefer to do new game development in (restrained) C++.

I had been harboring some suspicions that our big codebases might benefit from the application of some more of the various “modern” C++ design patterns, despite seeing other large game codebases suffer under them. I have since recanted that suspicion.


There are subsets of C++ that I just find too useful to pass up, particularly STL data structures like strings and vectors. Why shouldn't I use them? I'm just going to have to keep using them until I "blow my whole leg off" as they say.

I have to agree. I like STL + Boost a lot. I hope to try glib + C sometime. From what I see, that might allow me to use C and still meet standard application needs.

Good point, glib offers a ton of useful functions: http://library.gnome.org/devel/glib/2.6/glib-String-Utility-...

As you can see, it's no std::string, but it can still save you some time.

Interesting article. I do not have much experience in C++ but I am learning Objective-C. Do any of the sentiments mentioned in the article carry over to Objective-C? Or is that comparing apples and oranges? Is Objective-C a better extension/ superset (whatever the formal name) to C?

I would say that Objective-C in itself is a very simple object orientation layer on top of C. If you know C, you can pick up Objective-C in a matter of minutes. It is really simple. What makes Objective-C useful in OSX programming is the framework Cocoa. If you say you do 'Objective-C programming', you mostly mean 'Cocoa programming'.

In contrast to that, it probably takes years to really understand everything C++ has to offer. And that is talking about the language only, no frameworks or libraries included. Hence, I would say that you cannot really compare Obj-C/Cocoa to C++, since the former is a framework and the latter is a language. (This is assuming that most people mean 'Objective-C and Cocoa' when they say 'Obj-C')

If you want to compare Objective-C and C++ on a language level though, you are comparing two very different languages. Objective-C strives for simplicity and readability, which makes some things easy and some things harder. C++ strives for world domination. There is nothing you can't do with C++, but there are just so many things you can do that figuring out how to 'properly' do stuff can be kind of horrid. Really, there are so many different styles and dialects of C++ that you could use and freely intertwine that defining the language proper is a real challenge. Note that this is not necessarily a bad thing. C++ has a lot of strengths and is incredibly malleable for many different applications. But putting all that flexibility in one language certainly makes that language a truly complex beast, where even reading it can be a real challenge even after years of using it.

Personally, I am a sucker for simplicity and elegance and I would take Obj-C over C++ any day. On the other hand, I have seen some situations where Obj-C's message passing was just too slow for my application and I had to drop back to C function calls in some areas. Also, Garbage Collection and even Reference Counting have a certain performance penalty that might make Obj-C unsuitable for, say, embedded applications.

    There is nothing you can't do with C++
That's a tautology for any Turing-complete programming language. C++ may strive for world-domination, but I prefer my languages to be capable of introspection.

Complexity is not always directly proportional to the number of features supported.

    Complexity is not always directly proportional 
    to the number of features supported.
This is the difference between Turing complete and Gosling Complete. ;-)

I agree, Complex does not necessarily mean Complicated. In C++, it does.

They are very different languages with different objectives. Both of them started with the goal of providing OO on top of the C language, because it was used for everything at that time.

C++ was designed with performance in mind. The main goal was to be as efficient as C, and that means not relying on any runtime.

Objective-C was designed to be source compatible with C and extends the language with a Smalltalk-like syntax for OO. It uses a runtime which provides things like reflection and GC.

Objective-C and C++ are very different apart from their C ancestry. C++ doesn't make an especially good Obj-C replacement, and Obj-C is probably no better at the situations where C++ excels than raw C. Objective-C is a higher-level, dynamic-ish, highly-OO extension to C. (great for GUIs, not so great for writing drivers) C++ is quite low level, not dynamic at all, and not overly OO. C++ is very expressive in many situations due to its generic and automatic resource management features, which Obj-C lacks completely. (GUI code is typically horrendous, driver writing is nicer than in C)

"(great for GUIs, not so great for writing drivers)"

Probably the most succinct description of the relative strengths of Objective C and C++. Objective C for GUIs, C++ for drivers and embedded development.

NeXTStep drivers were written in Objective-C using a framework called DriverKit, and this was on 90s machines.

Objective-C is just C; there is no inherent performance penalty other than the dynamic dispatch of messages. In situations where that would actually be an issue, you'd be avoiding object-oriented features in C++ as well.

However, driver performance bottlenecks are typically in I/O throughput, not the language the driver was compiled in. The fact OS X drivers are written in a C++ subset likely has to do with the demands of device manufacturers coming over from MacOS 9, just like the existence of Carbon was to appease application developers like Adobe.

Objective-C is a completely different approach to OO C than C++. Obj-C is basically straight C with a very Smalltalk-like OO layer and totally different flavor. Generally speaking I prefer C++ but Obj-C is a better choice if you need to do a lot of metaprogramming. The things you can do with message passing in Obj-C probably make it a better choice for building UIs, for example. Although I really miss the scope-managed resources of C++ when writing iOS apps.

The more dynamic nature of Objective C enables many of the design patterns used in Cocoa development: delegation, key-value coding, bindings, targets and actions, the introspection behind Core Data, and many others.

Obviously, C++ being Turing complete these are all technically possible with C++. But in practice it would amount to "Greenspunning" the features of Objective C into C++.


the discussion going on here might be relevant: http://news.ycombinator.com/item?id=2221182

Previous discussion on this same article: http://news.ycombinator.com/item?id=885482

"I think the decision to be backwards-compatible with C is a fatal flaw." --Guy Steele

The funny thing is Objective-C is also backwards-compatible with C and there is a huge world of difference between programming in Objective-C and programming in C++.

While Objective-C is backwards-compatible with C, it doesn't tend to be as noticeable as it is with C++. Objective-C feels like a markedly different language in which you can do inline C if you want to, while C++ feels like more of an extension of C.

It's not so much that it's backwards-compatible with C so much that it is C. It's plain C with a class and run-time messaging system built on it. Message sends are translated to C function calls by the compiler.

Why are we using C++? Because we're writing 3D graphics and memory-heavy GUI applications for the engineering world. In this regard the combination of Qt and OpenSceneGraph is quite nice.

I really can't understand how someone could favour C over C++. OK, C is the simpler language, but the better abstraction capabilities of C++ outweigh that on a big code base.

>..better than C++ unless efficiency is a concern

Like other construction materials are better than steel unless strength and reliability are a concern.

I don't know, I enjoyed my time doing C++ work.

I think the key sentence is this: "I love to see how they do things because I think they don’t rely on it for the stuff that it’s not really that good at but totally use it as almost a metaprogramming language".

Using templates, it has by far the best metaprogramming abilities of any object-oriented language. I haven't used Smalltalk, but I've written production code in C# and Java, and whenever using them I always yearned for the metaprogramming abilities of C++.

Slowly but steadily I'm making headway with Lisp, so who knows, maybe Lisp macros will satisfy this metaprogramming need in the near future :)

If you want to see what C with macros would look like, check out C-Amplify: http://www.cliki.net/c-amplify

The following statement by the author doesn't sound like a programmer (let alone a C++ expert).

"I think I once managed to read all the way through Stroustrup’s The C++ Programming Language and have looked at at least parts of The Design and Evolution of C++. But I have never done any serious programming in it."

All I can say is, no programmer worth his salt would make such a statement... Getting all the way through Stroustrup's book isn't possible if you're not interested in C++ programming (given that it's not part of the For Dummies series), IMHO. It's like comparing it to a novel: a programming guide is not about reading, it's about learning.

I'm not sure what you mean. Have you read Stroustrup's book? It was my first "practical" intro to object oriented programming, and I struggled with it.

I would describe it as Byzantine technology explained with an economy of words that approaches poetry, at least in the way you must ponder each sentence. And the exercises are stimulating, in the sense of a cattle prod. I enjoy that sort of thing, in reasonable doses.

Since then I've stayed away from C++ unless it's externally imposed, and built things with C and Python. I like a language whose basic features can fit in my small brain, a language that gets out of the way.

The author is Peter Seibel, a fairly well-known Lisp hacker and author of Practical Common Lisp. While he's not a C++ programmer (and doesn't claim to be), you can't say he's not a real hacker.


I think the problem is in fact that nowadays developers jump into languages like Python, Ruby or PHP because they're easy to learn, easy to install and run, and easy to code in (you don't have to deal with memory or code optimization).

C/C++ are the base for everything you code today. People start learning how to code with languages like Java and jump straight into OOP. That's wrong.

Every developer should build at least a "hello world" in C, to know what real code is. Every developer must know about memory management, to scale and optimize code in any language. Other languages' compilers and runtimes do that for you, but you really need to know how it works.

    I think the problem is in fact that nowadays developers jump into 
    languages like Python, Ruby or PHP

People have disliked C++ long before those languages existed or became popular. A lot of people who have written and maintain large C++ code bases will tell you straight up that they prefer C to C++.

    A lot of people who have written and maintain large C++ code bases will tell you straight up that they prefer C to C++.
Perhaps that's the problem right there. As someone who has written a lot of C++ for quite some time, the most nightmarish aspect tends to be working with antique codebases.

The way people program in it has changed drastically over the last ten or so years. It's possible to have a reasonably painless experience with modern C++ that is pretty much impossible with codebases with significant history.

A lot of that is because the compiler support simply wasn't mature enough, so people did what they had to do to ship. If all you're dealing with is old C++ code, it's extremely easy to find that you prefer coding in C rather than C++ (which is almost what that old code really is).

The true test of a language, IMO, is how it will rot over a decade or so.

I'm maintaining a C++ web application written 12 years ago. It's not pretty. I cannot describe the difficulties even an experienced C++ developer would have in trying to understand and maintain it.

Fortunately modern C++ is pretty good. We've managed to tack on a legacy compatibility layer via the Boost libs to give us a Python wrapper. It's still running in production and is still pretty useable.

However, C/C++ still requires a fairly high cognitive load for a diminishing return, IMO. There are other languages that are easier to understand and maintain whose implementations are getting increasingly better at competing in areas where C/C++ used to be the go-to choice. There are simply fewer cases where I will reach for C++ these days where it used to be the only choice.

You're absolutely right, of course - a webapp in C++ is not something that I'd want to even volunteer to help maintain.

I'd just say that it's not the language that rots, but the libraries, and preexisting code. There are still some areas where C++ is a good choice, but for things like a webapp, there would have to be a really compelling reason to choose it.

And as someone noted earlier, a lot of people formed their opinion of C++ a decade ago. A decade ago C++ compilers generally sucked. Templates were usually broken, exceptions were usually broken, and the STL was generally broken.

EDG was the first front-end to really get solid. Then Visual C++ got its act together and started releasing highly standard-conformant C++ compilers, and then GCC did.

Now if you're writing standard compliant C++ it will likely work on the big 3 front-ends.

With that said, there are some complexities in C++, but fortunately you can largely be oblivious to them until you need them.

Crockford correctly points out that Javascript has a very solid core. C++ shares that. With that said, I still prefer not to program in it... Just not a fan of non-GCed languages for the most part.

I, for one, got fed up with C++ long before I ever heard of Python or Ruby. I pretty much quit programming because I got frustrated with the amount of boilerplate code I had to write/the bending over I'd have to do in order to get the simplest ideas fleshed out "idiomatically".

A language is a means of communication, and C++ fails miserably at that.

    I pretty much quit programming because I got frustrated with the amount 
    of boilerplate code I had to write/the bending over I'd have to do in 
    order to get the simplest ideas fleshed out "idiomatically".

Don't take this the wrong way, but programming isn't for everyone. For some of us, it's in the blood. It's what I do during the work week and on the weekends for fun too.

Your complaints are valid of course, but take a look at all the systems developed for boilerplate elimination. I have used preprocessor metaprogramming, template metaprogramming, and "external Ruby script" metaprogramming - sometimes all in the same project.

But it's not everyone's cup of tea.

Some people swear by Python.

Oh, you misunderstood me - I used to be just like you, for 10 years or so, before I quit. I must've written tens (hundreds?) of thousands of LOCs in C++, but I was gradually getting sick of it and eventually almost stopped completely.

I still code, but it's not what I do for living, and incidentally, I do enjoy using Python (and also Scheme)!

How did I know...

I really fell in love with the cleanliness and simplicity of Lua for a bit there. Used it to prototype stuff that I'm now rewriting in C++ with great pain. I sure would benefit from having a good debugger though.

    Every developer should build at least 
    a "hello world" in C, to know what's 
    a real code.
I'm not following. Are you saying that "Hello World" is emblematic of "real world" C code?

My opinion is that "Hello World" serves one purpose only in C, to show how much the C compiler agrees with the host OS.

That was just an example.

Developers should know what a pointer is and how to deal with it, because this is what happens every time you hit 'Build and Run' in the compiler or run your Python script.

And even a hello world in C can show all of that.

And why should a developer bother to know how to deal with pointers?

And since when does writing "Hello World" in C teach you what a pointer is and how to deal with it?

"And why should a developer bother to know how to deal with pointers?"

Because if you don't even know the most basic things like the stack and the heap and memory addressing and the mechanics of calling functions, how can you expect to be a "developer"? Fine, let people start learning with languages that abstract all of that away, but for a well-rounded, complete education as a programmer (I'm not talking school education, I'm talking general education) you need to know about these things.

Perhaps he uses the term "developer" to include "web developer" or "suburban residential housing division developer".

I really enjoy C++. It's a "live and let live" language. Pick what you want to use and be happy. Sure, not everyone picks the same pieces to use, but like in everything, humans will have differences and C++ allows and encourages that.

You can do procedural, functional, OOP, generic, template meta-programming etc. It has something for everyone and does not try to force anything down your throat. That is why I love it.

I use it daily and am very productive with it.

Edit: grammar

> does not try to force anything down your throat.

It forces me to relinquish a host of assumptions about most programs. This is a huge source of bugs.

> I use it daily and am very productive with it.

Meaning, you write lots of code? Or you solve lots of problems that weren't introduced by C++ in the first place?
