Why C++ sucks (2016 edition) (dorinlazar.ro)
126 points by ingve on Feb 21, 2016 | 179 comments

I go a little deeper and more esoteric with my big gripe with C++... why can't it be parsed with a simple LALR(1) parser?

Donald Knuth summed this up much more elegantly than I could ever hope to do so, in his six character critique of C++:

a < b > c;

What the hell does that mean? What it means is that you cannot determine the meaning of those six characters without knowing in advance the types of a, b, and c. Is it an odd-but-valid C comparison? (a is less than b and b is greater than c) Or is it a template declaration? You cannot know these things without context.

Back when I studied compiler theory and was reading the Dragon Book, the simplicity, elegance, and precision of the LALR(1) parser design really touched me. And it was implemented on every Unix box with lex and yacc. This is how you write a compiler! This is how you understand structure precisely! To be syntactically valid, a statement must have one and only one meaning.

C++ cannot be parsed without interpreting variable types at parse time. LALR(1) does not work. It never has worked. C++ has been broken on this front since the 1980s, when it was just a preprocessor that cross-compiled to C. Why? What possible benefit is there to giving up that core bit of parser discipline?

"Because it's powerful!", someone sneers, as if that condescension justified bad design. "Because templates are so hard!" another whispers. But what about this?

a [< b >] c;

If the template declaration tokens were [< >] rather than < >, they wouldn't overload the comparison operators. The meaning of Knuth's example would be unambiguous, and both could be parsed without grief - both by machines, and by human eyeballs. This isn't difficulty. It's just not thinking through the consequences.

But that's okay. Smoke enough of that << and >> crack, and you'll soon forget those tokens are bitwise shift operators, not i/o. Sigh.

Of all the complaints I can think of regarding C++, the one I care least about is the difficulty in disambiguating the grammar... Languages exist to help humans, not compilers. Of course, C++ often fails in that regard as well.

Aside, a pox on the house of bottom up parsers. Anyone trying to produce useful semantic errors out of a bottom up parser understands how god awful they are to work with. There's a reason both GCC and clang use hand rolled recursive descent parsers...

My first professional exposure to C++ was porting code across every Unix platform available in the late 90s, using a bunch of proprietary compilers. It was hell. Nothing was consistent. And of course, there were no unit test suites, and the programmers thought that if it compiled on HP-UX 9, it'd compile anywhere. So I have a real attitude about how the parsers work.

I have yet to see a reasonable explanation why C++ needs a type-aware compiler to accomplish its goals. I can't think of any other languages with this problem.

"I have yet to see a reasonable explanation why C++ needs a type-aware compiler to accomplish its goals."

Are you saying strong typing is a problem with C++ as a language? Or am I misunderstanding?

You can't assign a category to the tokens in something like "T * p" unless you know whether "T" is a type or not. Suppose you have the following line of code:

    T *p;
It means one thing if it were preceded by this, making the line a variable declaration:

    struct T;
    typedef struct T T;
It means quite another if it were preceded by this, making the line an expression:

    int T=5,p=0;
(This particular problem also affects C.)

Why is this a problem? Is your gripe that * is used for declaring pointers as well as for multiplication?

The specific details aren't too important. (Another example: "(T)-1" - very different interpretations depending on whether we previously had "typedef int T" or "int T=10".) The problem is that a line of code can't be understood, not even at the most basic level of dividing it up into appropriately-categorized tokens and arranging it in an appropriate tree-like structure, without having previously interpreted (to some extent) the code that precedes it.

From a practical perspective, this makes the compiler code more complicated, because now the parsed code has to be analyzed and the results fed back to the loop. As well as the obvious negative effects of code that's more complicated, this also means any code analysis tools that work with source code have to themselves do this same work. (For C, this probably isn't too much of a difficulty, but it's still annoying having to keep track of typedef names and so on just so you can parse it! For C++, it's a very big problem indeed, on account of how much stuff you have to keep on top of - and that's why C++ source analysis tools basically never worked properly once you got past a certain level of complexity until people started just using the compiler to do it.)

I don't have an in-depth background in this stuff, so it's possible there are also more abstract benefits from having a grammar that doesn't suffer from this sort of problem.

You are probably talking about static typing, not strong. Strong typing implies that there are no implicit type conversions which is not true for C++.

Oh god not this again...

> Of all the complaints I can think of regarding C++, the one I care least about is the difficulty in disambiguating the grammar

Compile time speed? Refactoring? Jump-to-definition? Autocomplete? Intellisense? Those are all important for UX, and a tricky-to-parse grammar makes those things mighty challenging.

Visual Studio does an almost-perfect job of this, and what is missing is not because of the parsability of C++. What's your point?

Those are all examples of semantic analysis (which is also a problem with C++, but has little to do with the grammar). A tricky grammar is simply a parsing problem.

C++ compilation times are absolutely ridiculous, esp. when compared to something like D which has greater expressiveness in its compile-time features.

Semantic analysis is a post-parse task. Difficulty of constructing the parse tree has no impact on consumers of the parse tree.

There's a reason both GCC and clang use hand rolled recursive descent parsers...

Another thing about recursive descent, besides the ability to generate very specific error messages, is that it's a naturally intuitive algorithm which is very similar to how humans mentally parse. Perhaps that's also why context-sensitivity is not a problem at all, since natural languages are AFAIK all context-sensitive.

Parser generators are complex, generate huge amounts of virtually undebuggable code, and require learning another language, yet are less flexible than recursive-descent. It's clear why they've fallen out of favour in comparison to simpler and more flexible parsing algorithms.

Natural languages are reasonably close to context-free, actually. There are some freaky constructions in Dutch and (some variants of) German that can't be generated by a CFG. They sometimes get called mildly context-sensitive.

If a computer struggles to parse unambiguously, it doesn't bode well for mortals.

Depends. Have computers do what computers do well, have humans do what humans do well. Humans are intuitive. They can make correct decisions from incomplete, ambiguous, or even incorrect data. What humans suck at is precise, repeatable thinking - like parsing. Meanwhile, computers are great at generating repeatable results from consistent data very quickly, but absolutely cannot make decisions. Even conditional statements aren't actually "decisions".

So a language can be hard to parse unambiguously for machines, but it can be pretty straightforward for humans - like pretty much any natural spoken language.

Try inserting "English" between "parse" and "unambiguously" and you can see how patently false your assertion is.

I had English in mind when I wrote that comment.


It has knock-on effects; even now there aren't really any great refactoring tools for C++ like there are for C# or Java.

What does Resharper not do for C++ that you'd like it to do? I know there are some areas that aren't as well-covered as the C# version; I'm just wondering in what real-world scenarios those make a difference (my experience is relatively short so I'm probably missing things).

> Languages exist to help humans, not compilers.

Until you realize that there is rarely any proper tooling around the language, which would be really helpful to humans!

I can list clang and cppcheck on Linux, but nothing else so far. Some of the clang related tools still choke on some parts of C++ though, so even that isn't close.

While this is a legitimate complaint, it also applies (though in fewer ways) to C. How do you parse this?

  (x)(y);

Is that a call of function (or function pointer) `x` with argument `y`? Or is it a cast of `y` to type `x`? You need to have kept track of all the typedefs in the code prior to that point to know.

Another one:

  x * y;
Is that a multiplication, whose result is being thrown away, or a declaration of a variable y of type pointer-to-x?

Good point. But keep in mind that C predates C++ by over 15 years. There were tremendous advances in theory in that time. It's not completely fair to compare a hacker's tool from 1970 with a state of the art academic exercise from 1987.

Early versions of C date to 1972, while early versions of C++ (called C with Classes back then) date to 1979, only 7 years later, probably a few rooms down the hall from where C was originally developed.

True, templates came much later, but by then C++ was already hopelessly unparsable by LALR(1) parsers. But in truth, it was this way from moment 0, since it was based on C-syntax. Making it parsable by yacc was a lost cause, since yacc followed C and not the other way around.

By itself it's almost certainly a function call with a discarded return. The result is not assigned to anything.

It depends on the context though. Code like

  vector<int> years;
  list<string> names;
is pretty easy to wrap your brain around.

> Is it an odd-but-valid C comparison? (a is less than b and b is greater than c)

That's not the C comparison, that's a Python chained comparison. A C comparison will parse this as `(a < b) > c`, and you'll get false for a=3, b=5 and c=1.

This is pretty much solvable with a preprocessor and some heuristics and you get to keep your LALR parser

Also IMHO a<b>c should not exist as a comparison. Does this mean a<b AND b>c? Then spell it out, it's a rare case and it doesn't contribute to readability (as opposed to a<b<c for example)

> Also IMHO a<b>c should not exist as a comparison. Does this mean a<b AND b>c? Then spell it out, it's a rare case and it doesn't contribute to readability (as opposed to a<b<c for example)

Both are probably broken anyway. The first one will compute as (a < b) > c, and the second one as (a < b) < c. Unless you're using Python which has chained comparison, which has its own pitfalls e.g. `a < b == c > d` may not do what you want (it's equivalent to `a < b and b == c and c > d`, not `(a < b) == (c > d)`)

I'm sorry, but from the point of view of technical merit these opinion-based rants are starting to feel really disingenuous.

Scott Meyers has written lots of good stuff on how to use C++ properly.

Yes, C++ is the rocket-octopus in razor armour, equipped with automatically foot-targeting shotguns and all the rope needed to hang every developer twice.

Is it the best tool everywhere? No. Is it my favorite language? No. Is it absolutely essential for value producing programmers around the globe? Yes.

The thing is, the rocket-octopus infestation is so wide that we've learned to use it to our advantage (after a few missing feet, but hey - rockets!) and understood how to deal with its kinks and warts. We've got pretty good tools to wrangle it (Visual Studio is actually pretty good, IMO), and boy, can it go places.

The ecosystem is the main reason it's as alive today as it is. I would much rather take my iridescent paradise bird F# to work, but the sad thing is, the poor thing does not yet survive everywhere the rocket octopus does. My pet python is much better behaved, but unfortunately it still does not have hassle-free rockets, nor can it survive in high performance scenarios without even more wrangling than the octopus needs.

You know what? I'm going to take the controversial opinion that C++ is both the best tool for everywhere and my favorite language.

C++ really has only one downside. It's hard to learn. Almost all complaints about it really boil down to not knowing the language well enough.

But so what? Because once you know it well...

Once you know C++ really well, you understand what it means to have no compromise.

It has the best performance and the highest level abstractions in one package.

It has both amazing safeguards against programmer error and the ability to break all of them if you need to.

It has great portability while still giving you full access to all the platform-specific features.

You can feel just as at home writing a web browser or a website in C++.

I could go on.

You get all this good stuff, and it only really takes a few years of seriously using the language to get that good. What is a few years in a 40 year career?

Also, don't believe the bullshit of "nobody really knows C++" that a lot of people claim. You can find dark corners of any language. The reason they are dark corners is because they don't come up in real code.

Hm, no, it sucks. "I can do that with ease because I've done it so many times" is really a terrible argument to explain why something is good. If you've lost a leg, walking with a crutch becomes second nature to you, but that doesn't mean that's the preferred or most efficient way of walking for the human being.

Yes, you know by heart that all_of requires "foo.begin()" and "foo.end()" and the capture context "[]", but it doesn't negate the fact that many other, better languages just know how to cycle an array without having to specify begin and end, know how to capture arguments when needed, don't require a useless (in this context) return, nor clutter a single line of code with nine different gibberish symbols like "(", ")", "::", ",", "[]", ";" "{", "}".

Take the ultra humble javascript: foo.every(a => a % 2)

When you need three times the characters to obtain the same result, and those characters are a clusterfuck, it is not relevant if years of usage made you comfortable nonetheless with it.

> know how to capture arguments when needed

Not to take away from your other criticisms, but in most languages how the arguments are captured has no relevance (in Javascript — and most every other managed language — the environment is always captured by reference and the GC handles lifetimes of "whatever"), whereas it is very relevant and important for C++. So I think that part of the criticism is unwarranted.

Though Rust didn't go to the same lengths[0] you still have to specify whether the environment is captured by reference (the default) or by value (`move` lambdas), because that can have a large and semantic impact on the system.

[0] see https://www.reddit.com/r/rust/comments/46w4g4/what_is_rusts_... for the impressions of a C++ developer wrt what he sees as corners cut, and replies by rust devs showing these corners turned out to be mostly unnecessary in rust

i think having known bounds would make arrays a non-zero cost abstraction.

> C++ really has only one downside. It's hard to learn. Almost all complaints about it really boil down to not knowing the language well enough.

Did you read the link and the examples shown (the std::chrono one, for example)?

Yeah, in the time I go around trying to solve C++ self-inflicted tangles I can learn Go or code stuff in Python (or even Java/C#) which 99% of the cases is fast enough

I used to advocate for complex stuff, but now I think differently. Solve my problem, I don't care about religious issues.

I don't think the std::chrono example is fair at all.

In the C++ case, you specify which clock you want. What kind of clock is DateTime.UtcNow()? Is it monotonic? What's the resolution? It's maybe in the documentation, but if the author refuses to remember simple syntax, surely he won't want to remember such details either.

The author also chose an example where he abuses the default behavior in the C# case. The last line in the C++ example just vanishes if you use the default behavior in both cases.

Also std chrono is very strict with types: time points and durations can't be freely mixed, milliseconds and microseconds have different types [1] and you can't implicitly convert from a float based time to an integer based time.

Does C# do the same thing?

[1] the underlying type actually encodes the exponent as a non-type template parameter. Converting from ms to us correctly applies the multiplicative factor.

> Does C# do the same thing?

To some extent. Points in time (DateTime) and durations (TimeSpan) cannot be mixed either, although you can subtract two DateTimes and get a TimeSpan, or add a TimeSpan to a DateTime. But I guess those work in C++ just as well.

Internally they are all based on ticks, which are 100-ns intervals (in DateTime since 0001-01-01) stored as a signed 64-bit integer. Different quantities of time don't need different types because there's only one type that represents a duration.

But with any complex interface, wouldn't it be a good idea to introduce a less complex facade that only deals with common usecases, and for complex use cases use that underlying interface?

The whole point of the complexity of the chrono interface is to prevent mistakes caused by accidentally mixing different units. Providing a default unsafe interface would completely defeat the point.

That's an interesting point of view to take in defence of C++ of all things.

Sorry, I might be misparsing your reply, but are you saying that std::chrono is overly strict? Or that because C++ (and its C heritage) has allowed unsafe implicit conversions in the past, it should continue doing so in the future?

Just that it feels strange to see safe defaults/interfaces used to argue for C++, nothing more.

"I can learn Go or code stuff in Python (or even Java/C#) which 99% of the cases is fast enough I used to advocate for complex stuff, but now I think differently. Solve my problem, I don't care about religious issues."

This sounds like it's written in the hobbyist / lone tinkerer / deploying-to-self-managed-server context.

When discussing whether anything could replace C++, one is usually in the context of a massive business environment depending on millions of lines of C++ and native bindings, not figuring out how to write a handy personal tool.

For handy personal tools, Python is probably much better than C++ 99% of time, but when discussing, how to replace a language in an entrenched ecological slot with a huge number of interdependencies things are really not that simple at all.

> Python and Go

> 99% of the cases is fast enough

This claim is unsubstantiated hyperbole.

how dare they.

Right, bro? That's what I'm saying. Misinformation is no bueno

> C++ really has only one downside. It's hard to learn

I don't think the article lists just one downside. For example, it's also undeniably hard to parse, making it harder than necessary (if not impossible) to implement efficient refactoring tools and other parser-dependent tools.

> It has both amazing safeguards against programmer error and the ability to break all of them if you need to.

C++ has two main design constraints -- it comes from C and its abstractions should cost nothing. Those two have enormous benefits and drawbacks. Performance is one thing. Other languages that don't have those design goals/constraints have different benefits and drawbacks. It's easier to make a small shell util in perl/python than in C++. It's easier to make a formally verified system in a functional lang than in C++ and so on. "The best tool for everywhere" meaning "if you need to use the same language to solve all N problems then C++ is best of the major and widely used languages" might be true, but it's also a pretty poor metric.

> For example, it's also undeniably hard to parse, making it harder than necessary (if not impossible) to implement efficient refactoring tools and other parser-dependent tools.


I'm curious, what languages have better support for tooling than C++?

Java is an obvious example.

I can think of several areas where Java support for tooling is unobviously superior when compared to C++.

The SDK for the Arduino is a good example, so is the Unreal game engine, the tooling support in Xcode for iPhone apps, the support for static analysis via Clang, the auto-formatting support via clang-format, the support in Windows with Visual Studio, and the support for autocompletion in UNIX text editors such as Vim and Emacs with the YouCompleteMe daemon and client (which leverages clang).

So if Java is an example of superior tooling, it's not really an "obvious" one

perhaps "unobvious" to you, but that's something which is empirically (if subjectively) resolvable, no?

My bro, it's truly an honor to have my first HN comment stalker. I feel like I finally made it

a coincidence, but whatever.

I completely agree. I very much enjoy working in it, even after 2 years of exclusively Clojure development, I had no problems coming back to modern C++ with all its lovely functional tools and enjoy every moment of it.

> The reason they are dark corners is because they don't come up in real code.

Thank you!

It's amazing how much FUD is spread using examples that would never pass code review! (or even be submitted to code review unless you were playing a prank on your coworkers).

> It has the best performance and the highest level abstractions in one package.

The highest level abstractions, are you kidding me?

I am currently writing a little interpreter in C++. I would have used Ocaml, but the rest of the project is in C++. Well, just try to write an abstract syntax tree in C++. In languages such as ML and Haskell, this is easy:

  type expression = Litteral    of value
                  | Symbol      of string
                  | Funcall     of expression * expression
                  | Conditional of expression * expression * expression
                  | While_loop  of expression * expression
                  | Sequence    of expression list
And I'm pretty much done. Now in C++:

  #ifndef __Expression_h__
  #define __Expression_h__
  #include <cstdint>
  #include <string>
  #include <vector>
  #include "Value.h"
  namespace Script {
  class Expression;
  class Expression {
      enum Tag { litteral, symbol, funcall, conditional, while_loop, sequence };
      union Val {
          Value                   *litteral;
          std::string             *symbol; // or a true symbol?
          std::vector<Expression> *composite;
      };
  public:
      Expression(const Value&);
      Expression(const std::string&);
      Expression(Tag, std::vector<Expression>*); // takes ownership
      ~Expression();
  private:
      Tag _tag;
      Val _val;
  };
  } // namespace Script
  #endif // __Expression_h__
Don't forget the cpp file either:

  #include "Expression.h"
  #include <cstdio>

  using namespace Script;

  Expression::Expression(const Value& val)
      : _tag(litteral)
  {
      _val.litteral = new Value(val);
  }

  Expression::Expression(const std::string& sym)
      : _tag(symbol)
  {
      _val.symbol = new std::string(sym);
  }

  Expression::Expression(Tag tag, std::vector<Expression>* exprs)
  {
      if ((tag == funcall     && exprs->size() == 2) ||
          (tag == conditional && exprs->size() == 3) ||
          (tag == while_loop  && exprs->size() == 2) ||
          tag == sequence) {
          _tag = tag;
          _val.composite = exprs;
      } else {
          std::fprintf(stderr, "Expression constructor: Invalid Expression\n");
      }
  }

  Expression::~Expression()
  {
      switch (_tag) {
      case litteral   : delete _val.litteral;  break;
      case symbol     : delete _val.symbol;    break;
      case funcall    :
      case conditional:
      case while_loop :
      case sequence   : delete _val.composite; break;
      default:
          std::fprintf(stderr, "Expression destructor: broken class invariant\n");
      }
  }
I may use smart pointers instead of doing my own RAII by hand, but that's still a hassle. Some may point out that my ugly unsafe union type is not the way to do this, and that I should use a class hierarchy and polymorphism. But just imagine the sheer amount of code I'd have to write.

C++ is a low level language. Its high-level pretense is nothing but a thin veneer that cracks under the slightest scratch. Or, someone show me how to implement an abstract syntax tree in less than 20 lines of code. I'm lenient: Haskell and ML only need 6.

(Yes, I'm using Sum types, a feature that's pretty much unique to ML languages. But no, that's not cheating: providing enumeration and unions but somehow failing to combine them into sum types is a serious oversight.)

I think you're deliberately making it more verbose than it needs to be. The class definition could be condensed into:

    class Expression {
      enum Tag { litteral, symbol, funcall, conditional,
       while_loop, sequence } _tag;
      union {
       Value *litteral;
       std::string *symbol; // or a true symbol?
       std::vector<Expression> *composite;
      } _val;
    };
And you could probably use templates or macros to autogenerate most of those constructors/destructors automatically.

> I think you're deliberately making it more verbose than it needs to be.

I swear I am not. This will be production code, for which I earn my salary. Your code is terser, but still fundamentally the same as mine. It's still unsafe (we could access the wrong union member), and it still has to work around incomplete definitions and the lack of trivial constructors (I wouldn't use pointers otherwise).

Automating my constructors and destructors would be amazing. Unfortunately, I don't know how to do it —not without writing over a hundred lines of cryptic boilerplate first.

The ML example was missing the second half of the ML niceness - efficient pattern matching at syntax levels which makes for a really nice and understandable code, especially when writing complex compound clauses.

"...could probably use templates or macros to autogenerate.."

... which kind of highlights that ML example even more. In Ocaml or F# that original definition would have been more or less enough to start writing nice, efficient parsing code.

The sum types are not the only thing, IMO, that the ML family has going for it. As, for example, F# contains pairs and lists as syntax-level constructs (rather than library add-ons), a lot of code can be written at a really high level before optimization - in the same language - if it's needed.

With Boost Spirit you can write expressions akin to EBNF in your C++ code. The downsides are compile times (improved in the new version) and difficult to parse error messages at compile time.

If you stick to your code you should at least:

* add a copy constructor or delete it explicitly

* pass in the vector as unique_ptr to the constructor. This way the "takes ownership" will be clear without the need of a comment.

* add your missing includes e.g. vector

* stop using _ as variable prefix, it's forbidden

* throw from the constructor, don't exit

Boost variant + boost vector will allow you to eliminate all the manual memory management (boost vector allows incomplete types).

Thanks for the tips, I'll apply them. Just a couple questions:

* Do you have a link about the _ (underscore) being forbidden? Our current coding rules make us use them right now.

* Is there a standard exception object I could throw? If I could just write something like `throw std::invalid_argument("what String");` that would be terrific. I don't want to construct a whole class just for that. (Maybe I have to, though?)

I'll also seriously consider boost. While it's a huge dependency, we are on Debian GNU/Linux, so it's probably not that bad.

You're welcome. The underscore thing is mentioned in Global names, however it's more complex than what I said initially (this is C++ after all):

* double underscore or underscore + capital letter is reserved in any scope

* underscore is reserved in global scope

Your code is allowed in fact, sorry. My personal rule is that I never start an identifier with an underscore anywhere.

I missed the exc question: yes, invalid_argument is perfectly acceptable in my opinion. However, if you want to inherit from invalid_arg that is also possible - this would allow you to catch only this very specific type of error.

You could consider using any of the already existing sum type libraries like boost::variant.

The code will certainly be more verbose than ML, but still better than roll-your-own.

And manage another significant dependency. If boost::variant was as nice as ML, I would do it. But it's not. I'm not sure this is worth the trouble.

If you are already using boost, as many projects are, it is no extra dependency. Otherwise there are many standalone implementations.

Eggs.Variant is a high quality one: https://github.com/eggs-cpp/variant .

We are likely going to get a std variant type in the next standard, either as a library component or as a language feature.

c++17 might include a variant template which would replace all that (or at least the backing storage for it). (std::variant<Value, std::string, std::vector<Expression *>>)

So the C++ way is either to write a code generator for your own DSL, or to write a library that lets you write your grammar in an eDSL fashion. No pain, no gain ;)

> Yes, I'm using Sum types, a feature that's pretty much unique to ML languages.

That's a bit too restrictive. Rust, Swift or Kotlin are not MLs.

Good point. But we should let them age a bit. Right now they're a bit young for production code with deadlines and all. I hate C++, but I like the certainty that comes with tools I know.

Why do you need to use pointers for string and vector? Aren't they already heap allocated internally?

My code compiles as is, so you can try yourself if you want.

If I didn't use pointers, I would have 2 errors. One would be about the use of an incomplete data type (Expression), and the other about trying to put something that doesn't have a "trivial constructor" in a union (std::string, Value, and std::vector<Expression>).

I can't use references either, because I would have to initialise them in the initialisation list of my constructor, which I can't do until I know the tag. I could still initialise the tag first, but that's specified by the order in which the data members are defined, rather than the order of the initialisation list —a rather dangerous idiom to rely on, should anyone modify the code.

Note that in C++11 it is perfectly fine to put a type with a non-trivial constructor in a union.

You are right about the issue with std::vector and incomplete types. It will possibly be resolved in the next standard. In the meantime you can use something like boost::container::vector, which explicitly supports incomplete types.

It's extra work, but you could use std::aligned_union<1, std::string, std::vector<Expression>, ...> to declare space for the union and then use placement new/manual destructor calls to initialize and destroy.

well, you would, you're in games. fairly substantial amount of stockholm syndrome in game programmers, imo. ;-)

bjarne and others in the community have talked about the heavy cost of legacy and compatibility many times. it's okay to like the language but acknowledge that it's not as awesome as it theoretically could be. it's okay to be unhappy with the warts and weirdness -- after all, such things are what the committee needs to be mindful of when progressing the language.

there's no real reason a language can't outright replace c++ without having many (most? all?) of its idiosyncrasies. rust is a decent attempt, though the memory safety ideology seems too high friction to properly replace c++. there might be others who give it a shot someday.

Opinion-based rants? No, ignorance-based rants, with a level of malevolence that doesn't appear in good faith.

From claiming that "if you make all header libraries you’ll have a lot of copied code – the binaries will be larger", to blaming IDEs for the opaqueness of Cmake and Autotools, from deploring namespaces because "the programmers will end up doing a using namespace" to comparing a trivial std::chrono::duration_cast code example from cppreference.com to a C# example that does much less, it's one of the worst pieces of FUD I've ever read.

The only valid point is C++ IDEs not being too good, but even that has deeper reasons than Visual Studio rightfully disliking incompatibly compiled libraries or "it sucks as an IDE".

Yep, my point as well.

For example, right now it is the only native language available in all mobile SDKs.

So unless one wants to add extra layers to their tools for writing portable native code, with the consequent increase in development costs, C++ is the only game in town.

For example, Xamarin is great, but now on top of debugging Java, Swift, Objective-C, C#, one also needs to debug their glue code into the platform APIs.

I just picked Xamarin as an example of extra layers, I think they are great.

    > Is it absolutely essential for value producing programmers
    > around the globe? Yes.
This is dubious. Care to elaborate what you mean by that?

There is, of course, huge amount of effort needed to support existing critical software written in C++, but there's never (I'm using "never" rather than "rarely" very deliberately here) a need to start a new project in C++. Almost any alternative option is better.

>Almost any alternative option is better.

This is a naive comment. C++ is a very good option for a wide range of scenarios where trying to shoehorn some other trendy language in its place would be at least the same level of headaches to just learning proper C++ development. And you can also depend on C++ stability and future availability, which also makes choosing its latest high-level constructs a wise option over other new languages that compete to also offer new idioms to the programmer. C++ isn't perfect, but neither is any other language. If you really know C++, it is a logical choice for many new projects across many platforms and environments.

Well, please tell me what language will be good for developing a specialized image processing system, considering that the good libraries are all C++ and maybe have Python bindings. Deterministic destruction + templates + low-level memory management + the ability to ascend to full Java-like OOP with interfaces really do make it my favorite language.

Yes it has weird syntax and ugly default behavior, but that's a price for backward compatibility.

"This is dubious. Care to elaborate what you mean by that?"

I meant mainly the existing status quo as you elaborated in the following sentence.

"there's never ... a need to start a new project in C++. Almost any alternative option is better."

I agree that for a lot of things C++ is the wrong language. For some use cases, such as libraries supporting real time graphics on multiple platforms, I'm not yet sure there are better options (well, there is C, but I'm pretty sure that arguing which of those is better is a matter of taste and experience).

Since new C++ projects are being started today, reality contradicts your beliefs. Which one will win, I wonder - reality or dogma?

That something is being done today doesn't imply that it needs to be done today.

I expect the companies that are starting new C++ projects today to be put out of business by competitors using better languages.

"I expect the companies that are starting new C++ projects today to be put out of business by competitors using better languages."

I don't think that's how software business works. I'm pretty confident there are niches out there where even COBOL would be a just fine implementation language if it provided end value for the user.

Domain understanding, network effects and marketing seem to trump bleeding edge most of the time.

That's not to say there are no situations where selecting a better language would lead to far better outcomes if there is time pressure and the language is viable choice with the technical constraints of the project. I'm just highly skeptical that this is a rule that works in general.

For an extreme example where this rule starts to look dubious, embedded systems - are there any better languages even available?

Many embedded systems support Java or higher-level languages. OCaml might be an option; Ada or Rust certainly would be.

The problem with all of these, with the exception of Rust and Ada, is that it's hard to do things like getting at pointers directly. For all of the abstractions OCaml and Java bring, there is often a large performance penalty to pay, and when you're mass-producing an embedded product, that's money you're leaving on the table because you're asking for more compute power than you actually need to get the job done. Ada is usually a pretty good choice, but it's hard to find developers outside of the defense industry; Rust, on the other hand, is often viewed (perhaps rightfully so) as still experimental and unproven.

> For all of the abstractions OCaml and Java bring, there is an often large performance penalty to pay, and when you're pass producing an embedded product, that's money you're leaving on the table because you're asking for more compute power than you actually need to get the job done.

In my experience the performance differences are often grossly overestimated (particularly if you compare equivalent development times). And increased development time or higher defect rates aren't free. What you say is true for a particular niche, but I think that niche is pretty narrow (and in that niche I think the risks of Rust, while real, are smaller than the risks of C++).

Alternative explanation: things are done a certain way due to technical constraints and market forces which you don't know about or choose to deliberately ignore.

Entirely possible. But this is not "reality vs dogma"; it's one person's (or group of people's) judgement vs another.

At least in the storage space (which gets essentially no press on HN), I have come to find that the vast majority of core product codebases are C/C++. For the niche they fill, it seems really unlikely that even the still experimental Rust would replace them here.

Dropbox has re-written their storage layer in Rust https://www.reddit.com/r/programming/comments/3w8dgn/announc...

That said, they're the only one I know of, so your point still stands, generally.

> I'm using "never" rather than "rarely" very deliberately here

True, we could also have used FORTRAN.

> Scott Meyers has written lots of good stuff on how to use C++ properly.

Scott Meyers "retired" from C++ in December 2015 (http://scottmeyers.blogspot.fr/2015/12/good-to-go.html / https://en.wikipedia.org/wiki/Scott_Meyers). There may be a connection to C++ itself, don't you think?

PS: I still write most of my projects in modern C++, but slowly switching to D and other alternatives.

That post is less "I'm getting away from C++ because it's a poor language" and more "I'm retiring from my role as C++ advocate because there are plenty of quality resources available".

Regarding using 3rd party libraries: you don't simply "reference a library". You include their source code into your project, and compile them together.

Ideally, both your project and the referenced library should be using CMake, and the library's CMake file should be referenced from your project's CMake file. If the library doesn't use CMake, you write a CMake file for it.

This is the only approach that worked well for me on all platforms. So, save yourself the trouble and just use CMake. Yes, I know it has a horrible language, but it's like the PHP of build systems: it works and lots of existing libraries use it, making it easier to use them in your own project.
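For instance, the consuming project's top-level CMakeLists.txt might look something like this (project name, library name, and paths are made up for illustration):

```cmake
cmake_minimum_required(VERSION 3.1)
project(myapp CXX)

# The third-party library lives in-tree and has (or was given) its own
# CMakeLists.txt; add_subdirectory pulls it into this build.
add_subdirectory(third_party/somelib)

add_executable(myapp src/main.cpp)
target_link_libraries(myapp somelib)
```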

Yep, pretty much. I used C++ for a while, then switched to C and Python and other languages for a decade, then had to do some C++ for a bit again. And I was desperately trying to properly reference and link against some libraries. And then finally just dropped the source tree in my project and everything worked. It felt so dirty.

Git submodules work pretty great for this IMO. Rather than incorporate a snapshot of the source, pick a commit hash. Still frozen at a point in time, but you get to move which one you're using (or point to your own fork!) whenever you like.

With Cmake one can keep around an archive of the library which is then unarchived, compiled and linked into the project. Cleaner and quite easy too.

Edit: this is done with ExternalProject.

So don't use it? These "Why X Sucks" posts mostly seem like clickbait rants an author writes when they have a frustrating programming day. C++ is a huge language used all over the place, it's been here awhile, its pitfalls and warts are known, it can be cumbersome to write, but it's fast and there are a huge number of quality libraries for it. It's hard to write in. We know. Use it where it makes sense, use something else where it doesn't.

If you want native code with fast speed and a nice development environment, give Rust a try if you haven't already.

>> give Rust a try

Or Pascal. Combined with Lazarus, we have a nice open source Delphi-like environment. Really cool, eh?

Rust might be an option for C++ folks who like functional programming :D

> I had an array of widgets. Those widgets had textures. Those textures were copied over and over and over again, because containers can do that, so I had to go back to rethinking the storage class for the widgets.

Actually, he didn't have to rethink it, and many people don't - instead, their code just copies things. Avoiding copying increases the chance for mishandling ownership and deleting a still-used object (or you can use shared_ptr, and then at some point the reference counting involved might get slower than the overhead of gc.) Beating gc in terms of memory footprint and speed without adding bugs is possible, but not necessarily trivial - and it's often very easy to lose to gc instead.

There are many good reasons why no language ever copied C++'s approach to "values" and "references", where every type can be passed either by value or by reference and redefine copying, assignment and now "move semantics." The proliferation of unnecessary copying is one of those reasons.

Perhaps he should have put the textures in a central container and have the widgets use a reference to the texture item?

I would think that adding textures as a child of the widgets would imply that he WANTED the textures copied. I wouldn't put child items into an object unless I wanted them copied too, as their lifetime is tied up with their parent object. The problem is not containers - it is more with his design and him wanting to control the lifetimes of child textures.

Well, define "problem." Naive, correct C++ code will copy objects because it's the simplest way to make sure each object has one obvious owner and won't leak without having to write memory management code or fiddling with smart pointers, which have non-trivial semantics. Yeah, you can avoid the copying, but C++ definitely encourages it in the sense of making it the path of the least resistance.

I think the failings in this article stem from a misunderstanding of how to use C++ correctly, and it just compares it to a language the author is more familiar with, C#. C++ most definitely has a lot of faults, but at the same time, I've seen and worked with some truly awesome code bases using C++. And the above statement can be used for nearly every language that rapidly evolves (C++ pre 2011 was... well, a pain in the a to use).

All in all, I do wish these articles wouldn't make it to the front page of HN, because they don't really add much value to the community, but simply act as negative rants, not adding anything constructive.

> All in all, I do wish these articles wouldn't make it to the front page of HN, because they don't really add much value to the community

In my opinion the positive value provided by [legacy tool X is really bad] is if you read it like [X is being used not for technical merit but just because of momentum and the fact that we have grown comfortable with it and we really should be adopting alternatives Y or Z at a faster rate than we do]. Titles of articles are often "X is bad" or "X considered harmful" or some other inflammatory clickbait title. That's unfortunate, but it's how the web works.

If just a handful of people read these posts, nod, and then take a look at Rust (for example) then that's value. Yesterday X was git, on Saturday X was glibc, today X is C++ and so on.

I think the discussion such an article generates has value. I've learned about some nice C++ tools in this thread that I may not have otherwise.

As a C++ dev for 10+ years, I wanted to hate this article and say it was all wrong but a lot of what makes me efficient with C++ now is avoiding the problems that the article publicizes.

However, I do believe that every language has its own problems at least as severe as C++.

In support, I would mention the following:

[1] Mike Acton's talk: https://www.youtube.com/watch?v=rX0ItVEVjHc

[2] ZeroMQ article: http://250bpm.com/blog:4

[3] Cat-V harmful page: http://harmful.cat-v.org/software/c++/

Yeah C++ is annoying at the beginning... but just dedicate yourself for 10-15 years, it'll become a piece of cake!

And C++{n+15} would have tons of new fun features so you can enjoy learning them in the next 15 years!

Yeah right, and after 10-15 years you'll be able to perform, without a problem, tasks that a programmer with 2 years of experience can perform in a scripting language if (s)he wants to be expressive, or in C if (s)he wants to achieve performance.

"Clever solutions give rise to artificial problems that can require even more cleverness to solve".

Not to mention the mad scientists sitting in ISO conference rooms constantly thinking "how can we top our previous clevernesses?". So good luck with the language being the same in 10-15 years.

These kinds of ridiculous statements poison the discussion and make it devolve into a complete mess. Stop that!

1) C is very difficult to use correctly even with the best guidelines and intentions. My best bet for avoiding memory errors is being super-careful (read: super-slow) AND using static analysis AND using Valgrind to check afterwards. The last two steps are mandatory.

Contrast this with C++, where just by using smart pointers, vector and array I can eliminate a whole bunch of typical C errors.

2) Scripting languages solve different problems than C++. The typical C++ tasks can't be performed by your programmer with 2 years of experience any more than they could be by the same programmer if they had 40 years of experience.

Static analysis and Valgrind are wonderful.

This being said, I think I was six months into using 'C' when I mostly stopped having memory errors. Mainly, you construct the operations against memory objects by building up a series of constraints. This is less trouble than it sounds.

You learn to conform to the expectations of the language and libraries. And you use less of 'C' than there is to use. And when things get past a certain level of complexity, I tend towards making them state machines.

Should you have to do that? I have no idea. I do agree that based on most other people's 'C' code, it does appear to be painful for a great many people.

Do you have some example? I can't really picture the series of constraints that you mentioned...

RAII type things are part of it. The rest is simply making sure all the constraints are met. The fewer operational constraints, the better.

For text I/O parsing, just be semi-formal about it at least. Check your indices, be careful of integer overflow and be judicious in the use of floating point. Use block I/O instead of the finer-grained things in stdio.h.

Have instrumentation built in that you can enable to capture test vectors ( if you have the resources ). Error counters can tell you a lot.

The mechanism of choice for managing complexity tends towards state machines and sometimes message passing.

That's a start.

Probably manual RAII. That seems to be where most stable C codebases end up, with a simple object model.

Are you speaking about libraries or the language itself?

Please detail what in C++ would take 10-15 years to be able to do, that would be possible in 2 years in other languages.

This is more of a "why C++ programmers suck", but the main gripe I have is with programmers who seem to be obsessed with stuffing as much of the latest C++20 (or whatever they're at now) features as possible into the code they write, making it more verbose and bloated than necessary. The fact that these features now exist, somehow makes people want to find a use for them. The same phenomenon doesn't happen with C because the language is so much simpler.

and today the first thing I suggest to people is to stop thinking that C++ is a superset of C

I think that's the problem. It's certainly possible to use its features sparingly, as an object-oriented superset of C, resulting in the code as performant as C but less verbose, which is IMHO the main attraction of the language. Going crazy with the abstraction is what turns it into a mess. (Compare the similar "use as much as possible, and maybe more" behaviour often encountered in the past with design patterns, and the huge OOP-ifying of everything when OOP just started becoming popular. Maybe in a few years C++ will settle down with most code being written in a saner subset, with only sparing use of the advanced stuff...)

I thought that the latest C++11-onwards additions made code less verbose because I don't have to write so much code as the STL has a LOT more in it? And don't have to write so many explicit constructors, destructors, move operators, assignment operators etc.?

The code is much simpler when I look at my C++11 code compared to C++03.

On a more serious note, just take the Google coding style and strike out everything it says about exceptions. Boom! A relatively painless C++. You don't have to use every trick in the book.

As a long time C++ programmer C# felt ... it felt good. Yeah, the memory and the non-native binaries make it a non-starter for most things I do. But wow, I loved it.

In the early nineties I found myself faced with spending a year learning to program in C++ with Microsoft foundation classes or giving up on Windows programming. After ten years I started sh*tcoding in C#. I feel like I missed out on a lot of pointless suffering.

Yes, Win32 API/MFC was a frustration, but then I discovered Delphi/VCL. I never liked Pascal, so I was really happy when I found out about Borland C++ Builder. I did some N-tier apps with it; it was a pleasure to work with.

Didn't Borland suffer a massive brain-drain and all the developers left after the C++ Architect mismanagement? The VCL is full of bugs isn't it? And suffers horrible redraw and flicker the last time I used it (years and years ago!)

Quoting author's comments:

> If the Modules standard is adopted, half of the issues are gone, I think.

Agreed. Had there been a sane module system + package/dependency manager + CLI/build tool like almost every "modern" language does, C++ would be much, much more usable.

"The split between source and headers, which makes project management quite slow. If you make all header libraries you’ll have a lot of copied code – the binaries will be larger. You’ll have to recompile that code every time you want to use it."

You can't say "the split between source and header sucks" just because it's painful to use it wrong ("all header libraries")!

(All-header libraries are justified when you're dealing with highly templated stuff, but in this case you have no way to avoid recompiling this code every time you use it, as each required template instance needs to be generated anyway - smart compilers will get rid of duplicated instantiations).

The split between source and headers is a key to good encapsulation. From a software design point of view, it's a good way to break long dependency chains.

I agree with your sentiment, but I think the situation is a bit more nuanced.

The usual argument is that headers don't actually provide good encapsulation: implementation details internal to the class (i.e. private members) end up getting leaked into the public header. There are workarounds (pimpl, opaque "handle" types, etc.) but they all have their own disadvantages (mostly an extra layer of indirection).

Complicating the matter is the issue of static polymorphism: CRTP and its friends force template-ization, pushing lots of code into headers. Smarter compilers help a bit (with de-virtualization and constexpr), but if you want to guarantee that something is resolved at compile time, turning it into a template is the only real option.

Lastly there's the inlining specter; the separate compilation model means that without link-time optimization (which has admittedly made great strides in the last few years), code must be moved into headers to be eligible for inlining. Premature optimization and all that, but this is a death-by-a-thousand-cuts situation where the language gets in the way of doing the performant thing (and you're probably only using C++ if you care about performance in one aspect or another).

None of these things is an insurmountable problem, to be sure, but they represent little inefficiencies (either for the programmer or the program) which are not generally present in more modern language implementations. I am thinking primarily of Rust, for which the static polymorphism and link time optimization stories are strong--although perhaps that is in direct reaction to some of these C++ shortcomings, and we'll discover Rust has warts of its own after a few decades of wear and tear.

Thanks for your answer.

Call me old-fashioned, but I actually think the separate compilation model is a good thing: it makes it possible to delegate the build process to a mostly language agnostic tool, and to mix languages easily (C, C++, D) in the same project.

About the implementation details internal to the class: the 'private' keyword is the problem, not the header files.

If you want polymorphism, why not just use an abstract base class - as you would have had a layer of indirection anyway. If you don't want polymorphism, just use "handle" types. Am I missing something?

Spoken like a true c++ fanatic. Good encapsulation does not require splitting things up in different files that are later conditionally merged back together by a dumb non language aware text pre processor. Other languages manage this fine without it. And "template instantiation"? That's an implementation detail I don't want to think about.

C++ does have its nice parts and it does have its uses but some parts truly are shit, get out of the Stockholm syndrome and admit it.

No need to be rude!

Let's get this straight: I'm not advocating for the preprocessor here. I will happily put it in the trashbin where it belongs as soon as I have support for modules in C++ (By the way, 90% of the code I write is written in the D programming language, which has modules, and no preprocessor).

I'm advocating for interface files.

Whatever mechanism you use, you still need a way to separate interface files from implementation files. Interface files can be header files, they can be java files containing a single java 'interface', they can be "D interface" files (.di). It doesn't matter which mechanism you use for this, as long as you have a way for your caller not to depend on the implementation of callee (which might, for example, directly depend on a specific audio API, for example).

My point is, if you want the caller to be properly isolated from the implementation of the callee, having to maintain signatures in an interface file is unavoidable.

Call me a barbarian, but I like having the API separate from the implementation. OK there are some 'private: struct MyImpl* _impl;' things in headers, and yes templates (but you can put those in a .inl file and be on your merry way...) but overall - I much prefer .h/.cpp over .java or .cs.

Sure other languages 'manage' it fine. And if you want, you can put everything in your headers in C++, there's hardly any 'need' for .cpp files. Of course it will blow up your compile times, especially without precompiled headers (which are a pita with gcc, and relatively recent there too - gcc only started supporting them in 2006).

You don't have to split, though. You can put all your code in the header file. Your compile times will suck mightily, but you can do it, if managing headers is such a burden.

Can't agree more. IMHO, this Stockholm syndrome / religious cultist mentality is so horrible, the most horrible thing about C++..

Don't get me wrong, I can forgive all shortsighted design choices and all the bullshit that piled up over the years. Nothing is perfect, true. But what C++ does to otherwise intelligent people.. This technological enslavement is just fascinating.

I think most people view headers as a mistake. You can use it to separate the interfaces as you mention, but there are limits to how much you can do that, and parsing the headers over and over slows the build down to no real benefit.

If you look at the modules proposal Microsoft has been pushing for, it provides what looks like a better approach. You won't get the build slowness, you can explicitly control what gets exported.

>The split between source and headers is a key to good encapsulation. From a software design point of view, it's good way to break long dependency chains.

Yes, I'm currently working with Swift and I'm really missing header files. I just can't figure out how to get a quick overview of say a class' public interface without having to scroll through pages of implementation code.

Header files can be a real pain. But they are useful.

If you're working in XCode, use Navigate -> Jump To Generated Interface.

In any other editor (including XCode) and any other language you can use code folding to quickly see all functions.

Thank you! I didn't know about this. This will improve my workflow considerably :)

I think you missed his sarcasm.

I always thought it was kind of a hack to ensure consistent declarations in multiple files :) certainly not a key to something that came much later.

I hope he is better at C++ than at creating frontends for websites.

Site is clearly not served with C++. :-)

Yes, and that's a pity. And I will fix that as soon as I get my hands on it - I never knew that I needed to write that web backend myself. This site proved me wrong :D

Even just reverse proxy in front (written in C++, naturally) would have probably saved the situation.

A lot of people insist on avoiding the use of "using namespace x" in .cpp files. This makes no sense to me, it makes things a lot easier to read. It never seems to cause a problem, and it's trivially fixed if a conflict shows up due to a header change later.

> It never seems to cause a problem

It does if you want to use unity builds at some point down the road. For my app it took the build time from 25 minutes to 2 - 3 minutes which is critical when doing CI on a lot of operating systems and with a big build matrix.

I am not familiar with unity, but why should it slow down builds?

From a quick google, unity builds are akin to #include-ing every file in the project (yes, the .cpp files too).

Not sure what to make of this.


× There is quite absolutely no need for some fancypants IDE. Use a text editor that can indent stuff automatically and maybe has some syntax highlighting and you’re good to go and write all the code you want. Yes, even C++!

× The split between header files and translation units is ugly, especially if part of your class can be in a translation unit and part of it is templated and must be in a header file. But if your class is all-header, you just have a single file (which may be compiled into more than one block of code for templates); if your class is not templated, you just have a nice little header with your comments and the implementation hidden away. Yes, you may need to add a new field to both your header and your constructor implementation if you want it to be different in different constructor calls, but that’s hardly avoidable?!

× The C preprocessor might be ugly sometimes. Don’t use it if you think that. Everything will be fine. You’ll be okay.

× Namespaces are extremely useful to separate different sets of functions and classes. I find std::vector and boost::program_options::options much more readable than StdVector and BoostProgramOptionsOption. There’s only so many letters in the alphabet, names will need to be long eventually. If you really don’t like namespaces, just use using generously.

× Portability is fine between decent compilers on decent operating systems. Yes, Microsoft's Visual Studio sucked big time for a long time and if you want to use a Unix-native compiler on a weird platform, things might be strange. But I haven't seen a Unix yet where I couldn't compile my code.

× Not sure what to make of the next paragraph, are they just complaining that they can’t keep a mental model of some header files in their mind and that this must absolutely be the fault of the language?

× I don’t get the issue with std::chrono::duration. The reference is clear, the code is not too horrible and the C# code isn’t even better?

× Hint: \n does exactly what you want.

× push_back() and emplace_back() are two very different things and I’m glad that they’re different functions, as that makes it very clear whether you’re copying stuff around or constructing it in-place.

× The (few) bits of the boost library that I used so far were amazingly well documented. Yes, if you want to dig a little deeper, you may have to read some not-so-easy to understand code, but have you seen the code that powers your lovely C# runtime? Do you think it would be substantially easier to understand?

I'm exclusively an emacs user, but I would definitely like better IDE functionality. I use the builtin fast jump-to-file and text based autocompletion, plus SilverSearcher powered text search and it is fine 90% of the time.

Still I would love to have working jump to definition/declaration/reference, type and context exact autocompletion, semantic based highlighting and on-the-fly error checking. I do not care much for refactoring support except maybe basic renaming help.

There is EDE, but in my experience it fails very quickly on anything beyond trivial projects. There are a few clang based indexing/autocompletion engines (I favor rtags) which can do all of the above amazingly well, but they are not impressively fast and in my experience any setup I tried bitrots extremely quickly.

Use rtags. Its great.

C++ is very, very hurt by not having a base Object type from which all other types derive. Because of this, we get templates, and templates suck a lot.

Generics in Java are even worse. All of the idiocy of C++ templates + Java type-erased containers, none of the benefits. I've been bit way too many times by Java generics just forgetting what type they contain.

Generics in C# are slightly better, especially in that there are a lot of different types of constraints you can apply to them. When applied judiciously, they can greatly simplify a lot of code. They work best when everything is specified "correctly" with interfaces. Hold on a sec...

Sometimes I think templates/generics is all just a hack to get around the fact that I can't add interfaces to types I didn't define myself. When I'm designing a wholly closed system, I'm often going the old-old Java way of explicit interfaces for everything. It just keeps you honest about state encapsulation, which then keeps you honest about state transitions, which then basically just makes the program write itself.

So if you have a system like that, C# generics and its constraints work really well. But probably the entire reason you wanted generics in the first place was to not have to write all those interfaces. And you still can't add interfaces to things that you didn't write.

But at least you can write extension methods for them.

Of course, you can't get all the way to an algebraic type system with it. It makes you think you can. But you can't. That hurts.

> Sometimes I think templates/generics is all just a hack to get around the fact that I can't add interfaces to types I didn't define myself

You can though. They're called free functions. Types should only expose their basis functions[0] as members, keeping their surface small. Once you have that small set of reasonable functions, you can write free functions to do everything else. Sometimes if several of your operand types share a minimal set of basis functions it's useful to make these algorithms templates. That's all.

Scott Meyers was writing about this stuff 20 years ago. Here's a 15-year-old example[1]. Here's another, more recent, article[2] by Walter Bright, which compares the approaches of C#, C++ and D.

This is the reason why C++ has the 'madness', as some people see it, of function overloading and Argument Dependent Lookup[3] and, soon, a Unified Call Syntax of its own.

So you want all classes derived from an Object base. Fine. What are your basis functions for Object?

[0] https://en.wikipedia.org/wiki/Basis_function

[1] http://www.drdobbs.com/cpp/how-non-member-functions-improve-...

[2] http://www.drdobbs.com/cpp/uniform-function-call-syntax/2327...

[3] http://en.cppreference.com/w/cpp/language/adl

When I used to use OOP, I would use void* as a poor man's base object.

It's harder to get that with non-pointers (or pointer equivalents, like references): what is the calling convention for functions that take Object and return Object? Realize that somebody could upcast an integer into Object and then downcast it into a 10000-byte struct, and this could happen across different object files that were written in different languages.

What about Embarcadero C++ Builder (old Borland)? http://www.embarcadero.com/products/cbuilder

It has a very good IDE.

Easy to add external libraries.

You don't need to use the ugly STL because it comes with its own helper classes for almost everything (the VCL library).

Nobody using it?

I use it for a living, and I cannot recommend that piece of shit.

The IDE is alright, except the IntelliSense, if you can even call it that, becomes unusably slow in sufficiently large (really, just non-trivial) projects. The 32-bit compiler mostly supports C++98, with a couple C++11 features and a few missing C++98 ones. Don't expect to be able to use many open-source C++ libraries. The 64-bit compiler is a fork of an older version of clang, so you get more C++11 support but some of the Embarcadero extensions are missing (yeah, code isn't even syntactically portable between 32-bit and 64-bit C++Builder).

The compiler ICEs if you look at it funny and the general solution to fixing a build problem is to restart the IDE. If you have a syntax error, you can basically guarantee that it won't report the correct error location (usually it'll give you some system header files).

The debugger is nice enough (it's basically a really shitty reimplementation of 1/10th of GDB, with a more-or-less functional GUI slapped on top) until it crashes, which thankfully is much rarer than the compiler.

Using external libraries is easy enough as long as they're packaged as C++Builder or Delphi components. Unless, of course, you manage to fuck up the install process (quite easy) in which case you'll be editing a few registry keys (yup). If you're trying to use a regular C++ library—assuming, of course, that it sticks to C++98 features and doesn't use SFINAE—then you usually only have to edit the library's code a bit to work around C++Builder's quirks and you're set. Oh yeah, and you can't use C libraries built with MinGW (C++Builder uses its own object file format), which means you're stuck with C89 (and a little bit of C99).

VCL makes the STL look positively dazzling. Templates were inconsistently bolted onto the VCL, so usually you've got TObject* pointers and casting. VCL objects can't be stack allocated, because... reasons? VCL's memory management is extremely unclear, but it's mostly reference counted (explains the heap-only restriction)... probably.

There's a reason nobody's using it. It's a fucking piece of trash that needs to die already.


At least there's been enough outcry at my workplace that we're switching to C# for the next project.

Such a shame. Borland couldn't compete with MS (partly due to some dirty tricks MS used) and they've been on a downward course for a long time - they sold their developer tools division a long time ago.

C++ Builder was the C++ version of Delphi, a major productivity booster for Windows development. I remember using version 1.0 - it was truly amazing how fast one could develop UI applications with it. I would say it was about as easy as using Xcode's interface builder, when the competition was basically coding MFC classes (or WinAPI!) by hand. It had database bindings and one could link controls with data sources in a WYSIWYG interface in the time it took to put a button on the screen with MFC.

However, it was expensive and the compiler was not as good as MS's. I never had the chance to use it professionally; everyone was using Visual C++. As the years passed I forgot about Borland...

Luckily the legacy of rapid application development lives on in Qt: it's open-source, cross-platform, with a strong community and uses modern C++ tooling.

wxWidgets is also good as an alternative to Qt, but with a smaller community. I use it and don't have any problems with it (a few bugs in the wxAui classes though).

I think everyone left Borland after their C++ Architect project was binned, the Kylix attempt ("building font metrics..."), the change of name and core idea, and the Delphi language designer being head-hunted to design C# for Microsoft (unless I am mistaken??).

They released a PHP editor but I can't see them selling much of that.

What I don't like in Qt is:

-big exe size

-STL/template code style (I prefer C++ Builder's VCL object style)

I remember that you had to keep the installer lying around in Documents and Settings (Users) for it to install updates or uninstall. So gigs of data just sat there. And it took FOREVER to install.

I remember the Intellisense would do the equivalent of a disk defragment whenever you wanted it to pop up. CodeGuard gave many false positives. The linker would crash frequently. The VCL is mostly Delphi code as far as I remember with C++ bindings in some way?? It had bugs.

The TClientDataSet item would get incredibly slow when it grew to any significant size. STL containers are 4000% superior.

I suppose this is why Bjarne advised against compiler extensions, though they were necessary when introduced, as standard C++ didn't support all the __fastcall stuff Borland wanted to do?

"It has a very good IDE."

What values of 'good' are we talking about here?

Still the fastest way to build a user interface for Windows.

On Windows I have experience only with Visual Studio and this, but I don't see such a big gap in quality as described above by PeCaN. I am using it also for biggish projects and it fulfills my needs. (Maybe slow on low-end PCs, but I upgraded to an SSD a long time ago, which solved all responsiveness-related issues.)

> Notice the “unreadable code” from C#, even the portability issue (hint: \n does not always do what you think it does).

No, it does exactly what you think it does (unless it is a binary stream), it adds an end-of-line to the stream. If the underlying platform needs a "\r\n", the stream will do the conversion.

Yes and no. std::endl is not an alias for '\n', for good reason.

I still love C++, in spite of its C underpinnings that make writing safe code an almost impossible task in large teams, unless one controls what everyone is doing.

However, whenever I see SFINAE mixed with type traits, decltypes and arrow return declarations, I am not sure if I still love it that much.

Contrarian opinions can be useful and valuable. However I don't see much substance in this article. Could someone list what this person has done to make their opinion worth something?

I can confirm: absolutely nothing of import. I think that the attention shown to my rant is overblown, although I do enjoy the good points made by a lot of commenters here.

I do not see how his example of STL vs LINQ is hard to read. I think this article is just a rant.

Then try writing an operating system in C# (or any fun language) and we can talk later which language sucks. C++ is an awesome tool when used for the right job.

I feel like this is a bad argument to be making. If a language's "level of suck" were based on its ability to write operating systems, then the massive boost in programmer productivity we see today from languages which sought to depart from that domain wouldn't exist. Tangential to that, I consider Rust a "fun" language compared to C++, and it is definitely capable of doing anything C++ can do[1].

By the way, Midori[2] was written in C#. Granted it had its own AOT compiler and started to diverge into a different language, but nevertheless, it was C#. I'd suggest looking through Joe Duffy's blog posts[3] as he explains it better than I could.

[1] https://github.com/redox-os/redox [2] https://en.wikipedia.org/wiki/Midori_(operating_system) [3] http://joeduffyblog.com/2015/12/19/safe-native-code/

>I suggest to people is to stop thinking that C++ is a superset of C. It is, but let’s forget that.

Hard to take any article seriously that makes such a glaring and naive point. C++ is not a superset of C and if the author really knew the language well, he would not make this point.

In my post I suggested to watch a presentation. Watch it, you'll understand what Kate Gregory argues, and I completely agree with her point of view.

I will watch it, thanks. But that won't change the fact that C++ is not at all a superset of the C language; they are genuinely different languages. Objective-C, however, is technically a strict superset of C: all C code runs as expected in Objective-C. This is not true in C++, where the same C code might compile and produce different output when run in a C++ context, and several C idioms are not possible in C++ at all.

"What does the 11 from C++11 mean? The number of feet they glued to the octopus to make it a better." This
