Hacker News
C++ in Coders at Work (gigamonkeys.com)
178 points by edw519 on Oct 16, 2009 | 169 comments


Part of how C++ is successful is that it retains C compatibility, and C accurately models how the computer actually works (at least computers of the 70s and 80s, which is all we know how to program well). Lovely languages like lisp, python, haskell may be nicer to work with, but they do not model the underlying machine properly, and for so many problem domains that is just not acceptable. It's not just a performance question.

And then C++ implements several programming paradigms (object oriented, generic, etc) without compromising that machine model.

Plus Bjarne was truly correct when he said: "there are languages people complain about, and there are languages nobody uses."


Bjarne was truly correct when he said: "there are languages people complain about, and there are languages nobody uses."

I think this is very true. What it says, literally, is that popularity and attracting ire are correlated.

However, my interpretation is that the reason why this is true is that (1) all languages involve design trade-offs, (2) every trade-off pisses someone off for at least a moment, and (3) popularity attracts eyeballs, therefore more popular languages have more people pissed off.

I am not suggesting you say this, but I have heard the (strawman?) argument that it is possible to design a nice, pure language that is above reproach but that it can't be popular, and thus his quote expresses the thought that there is an inverse relationship between elegance and popularity. I don't think this is the case.


My first impression of the quote is that any language that gains enough users will find people who complain about it.

However, I agree that his main point is more that you will never be able to design a language that is useful to a large number of people without making compromises, and with any compromise there will be at least one person for whom it is not the optimal solution.

I don't want to say the relationship between elegance and popularity is strict, though. There is simply a good correlation.


I think C models how the computer works a lot more closely than C++ (and no one would argue that C isn't carrying water out in the engineering world even today). IMO C++'s issue is precisely that it layers all of these leaky abstractions on top of the strict procedural model of C.

For my money, developers are better off knowing two tools (c + some very high level language) rather than the spork which is C++.


I remember 'cfront' when it first came out, and to this day I haven't really changed my mind about how I felt about it: it's a much too bloated language compared to the elegance of C.

If 'C' had had a decent native string type, I think C++ might not have happened ;)


Are you suggesting that std::string, or as my not-so-friendly c++ compiler likes to call it

    std::basic_string<_CharT, _Traits, _Alloc>::basic_string(const _CharT*,
    const _Alloc&) [with _CharT = char, _Traits = std::char_traits<char>,
    _Alloc = std::allocator<char>]
is an improvement? :-)


No, not at all. I was thinking of strings the way Dennis Ritchie would have done it.

I can see why they chose to omit it, but in retrospect I think it was a mistake. The problem they faced was that, the way they wrote it, the language didn't include any 'runtime' at all; a string package would have made some runtime a must.

Everything that is 'runtime' in C is in libraries, and everything that is 'core' is in the compiler.
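
A hypothetical sketch (nothing DMR actually proposed; all names here are made up) of what a minimal counted-string package might look like, and why it drags in runtime support:

    #include <stdlib.h>
    #include <string.h>

    /* A counted string: a length plus a buffer. The moment the compiler
       has to build these natively, it has to emit calls to allocation
       routines -- i.e. the language grows a runtime. */
    struct str { size_t len; char *data; };

    struct str str_new(const char *s) {
        struct str r;
        r.len = strlen(s);
        r.data = (char *)malloc(r.len + 1);  /* the hidden runtime dependency */
        memcpy(r.data, s, r.len + 1);
        return r;
    }

    void str_free(struct str s) { free(s.data); }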


That's not what your compiler calls the string class. That's what it calls one particular constructor of the string class.

This constructor lets you customize the way the string class allocates memory. Now I would like to see the equivalent constructor in your favorite language :)
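
For the curious, a minimal sketch of what that customization looks like in C++03. The allocator here is our own illustrative MallocAllocator (an arena or pool allocator would plug in the same way); the boilerplate follows the C++03 allocator requirements:

    #include <cstddef>
    #include <cstdlib>
    #include <limits>
    #include <new>
    #include <string>

    template <class T>
    struct MallocAllocator {
        typedef T value_type;
        typedef T* pointer;
        typedef const T* const_pointer;
        typedef T& reference;
        typedef const T& const_reference;
        typedef std::size_t size_type;
        typedef std::ptrdiff_t difference_type;
        template <class U> struct rebind { typedef MallocAllocator<U> other; };

        MallocAllocator() {}
        template <class U> MallocAllocator(const MallocAllocator<U>&) {}

        pointer address(reference x) const { return &x; }
        const_pointer address(const_reference x) const { return &x; }

        pointer allocate(size_type n, const void* = 0) {
            void* p = std::malloc(n * sizeof(T));  // the storage policy lives here
            if (!p) throw std::bad_alloc();
            return static_cast<pointer>(p);
        }
        void deallocate(pointer p, size_type) { std::free(p); }
        size_type max_size() const throw() {
            return std::numeric_limits<size_type>::max() / sizeof(T);
        }
        void construct(pointer p, const T& v) { new (static_cast<void*>(p)) T(v); }
        void destroy(pointer p) { p->~T(); }
    };

    template <class T, class U>
    bool operator==(const MallocAllocator<T>&, const MallocAllocator<U>&) { return true; }
    template <class T, class U>
    bool operator!=(const MallocAllocator<T>&, const MallocAllocator<U>&) { return false; }

    int main() {
        // the very constructor from the error message above:
        // basic_string(const _CharT*, const _Alloc&), with _Alloc ours
        typedef std::basic_string<char, std::char_traits<char>,
                                  MallocAllocator<char> > mstring;
        mstring s("hello");
    }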


c + lisp, in my case. nice way to write DSLs.


The sad part is that there are a few good ideas in C++, so it is hard to throw out the baby with the bathwater. I use C++ like C with destructors, operator overloading, typed constants, and not much else.


Thus proving the author's point.


But C with vector, string, hash_map, const&, destructors and scoped_ptr is a pretty good language! It's certainly better than plain C. It's just not aesthetically pleasing that it sits inside the larger monstrosity that is C++.


Thus proving the authors point even further.

It is funny how everybody really seems to have their own favorite subset of C++.


Just as Lisp programmers add new syntax and vocabulary to build custom languages, C++ programmers do the same by taking them away.


Does anyone really use all of Common Lisp?

I'd say the justification for all of the large languages (like C++ and Common Lisp) is the same. You may not need all of it, but the features you do need will at least be standardized, compared to using a small language with ad-hoc extensions to achieve the same.

[ I here include the standard library as part of the language, for languages with extensive meta-programming support like C++ and Common Lisp, it makes little sense to distinguish. ]


Most users also probably have their own favorite subset of Word.


You've hit the nail on the head there, though: when offered a bloated solution, people will 'subset'; when offered something small and powerful, people will customize.


You missed my point. Most users each probably have a different subset of Word.


No, I got it all right, that's what I meant. Each user subsets Word for themselves into what they need/can handle.


No, that's still not my point. Each user has a different subset of Word. Hence, the only set that contains all of the features that all users use is Word itself. If everyone uses a different subset, it's not accurate to call it bloated, too large, or complain it has unnecessary features. All features are necessary for someone.


A feature can be simultaneously "bloat" and "used by someone".


assembly more closely models how the computer works than anything...


The chief problem with the idea that C "accurately models how the computer actually works", as you allude to but don't go into detail, is that modern CPUs work nothing like the CPUs that C is accustomed to. Between out of order execution, hyperthreading, caches, prefetching, pretending-to-be-uniform-memory-access-but-not-actually-being-it and concurrency the degree to which C's computer model is relevant is rapidly shrinking. We don't really have a good model for this, but that's not really a good excuse to keep writing new languages based on C.


"Part of how C++ is successful is that it retains C compatibility"

I think Objective C is a better object oriented C than C++. It's simpler, makes a syntactic distinction between message passing and "traditional" C, and has a more dynamic run time. I'm surprised no one else has brought up Objective C as a better solution to the "make C object oriented" problem.


Recently, I went from Ruby on Rails web programming to C++ w/Qt GUI programming. Using the C++ features that I choose, C++ is actually pretty fast to develop on... (I did MFC years ago and that was nasty but Rails internal code is also awful, with fifty-million separate classes just for exceptions).

Aside from C compatibility, C++ also allows one to create simple structures and procedures when such simple beasts are appropriate.

I currently suspect one should have either "full" object orientation plus weak typing OR weak object orientation plus strong typing. Strong typing plus full OO = Bondage and Discipline language, where the size of the code itself starts to really drag your development and debugging time down.

While Ruby or Python allows compact but slow code, Java and C#, among other excesses, force one to create a full class for every two-field data structure. Thus C++ is more compact than C# and Java and faster than Python and Ruby, winning the race for a desktop application language (oddly enough, desktop apps need a fast language because they must compensate for desktop windowing systems growing ever more bloated, while users expect more from their machines every year).

I'm sure Haskell or Lisp are excellent for some things but I don't think their paradigm is very compatible with GUI programming - I'd be interested if someone has a counter-example.


Haskell is pretty theoretical, but on the Lisp front I think AutoCAD and Emacs more than qualify.


AutoCAD is not written in Lisp. It has been written in several languages, including C.

AutoLisp is just a thin scripting layer.

See the Autodesk file: http://www.fourmilab.ch/autofile/www/autofile.html


That 'thin' scripting layer has been used to program some pretty impressive stuff in the engineering and architectural world.

Anyway, some more examples then:

http://www.cliki.net/Application


> That 'thin' scripting layer has been used to program some pretty impressive stuff in the engineering and architectural world.

Yes, I've seen people work wonders with AutoLisp. There was - probably still is - an application used for designing clothes. The designer creates, I think, a "size 8" and a lisp script generates the other sizes.

AutoLisp being lisp-like is really a happy accident. The Autodeskers chose Xlisp IIRC because it was free and embeddable. If they could have embedded a C-like language interpreter, they would have.

In fact later, AutoDesk introduced other languages and tried to push users onto them. But AutoLisp already had huge momentum.


At least C# doesn't require you to allocate a file for your class, and a simple two-field class should consume no more than 5 lines of code. And if you're willing to use a generic tuple type, you don't need to declare a class at all; just use Tuple<Foo,Bar> as needed.


Or just use an anonymous type:

  var simple = new { Field1 = "foo", Field2 = "bar" };


Strong typing plus full OO = Bondage and Discipline language, where the size of the code itself starts to really drag your development and debugging time down

You're missing a crucial part of the equation: type inference. OCaml does what you want :-)


Sure, I just meant in the languages I've been using recently.

I think someone is coming up with a type-inferring variant of Ruby called Juby, which could do the trick once it works.

Now, has anyone coded a GUI with OCaml?


Not personally (still use Tcl/Tk), but: http://coherentpdf.com/blog/?p=38


I just have to nitpick here: Strong typing is always a good feature, and weak typing is always a bad one.

Now, whether the types should be dynamic or static, is entirely up to the situation and what kind of system you are building.


Objective C is cleaner and much more fun to write in, but it's less accommodating than C++:

* You have to actually run the dynamic runtime, which even OS X's XNU doesn't do.

* The dynamic runtime also obscures what's going on with the hardware, which can be a problem in embedded environments.

I like Objective C but it's always felt like voodoo to me. I'm always more comfortable in Ruby+C than I am in ObjC.


It looks like the only platform that had a production-level ObjC library was NeXT/OS X.


The computer actually has a von Neumann architecture, that is, code and data live in the same address space.

To the degree that C represents this reality, it is through exploits: e.g. buffer overflows, overwriting the stack and return address, and executing arbitrary code supplied by attackers. C itself does not represent the reality very well, since it does not lend itself to writing code that writes code at runtime.


> The computer actually has a von Neumann architecture, that is, code and data live in the same address space.

The ones that do not are actually pretty rare.

Most of those use the 'Harvard architecture' and are DSP-style machines.

And then there are vector processors and SIMDs, but even those can be seen as many von Neumann machines running in lock-step.


The (vast) majority of programmers are not programming DSP-style machines, so like I said, and I still stand by it, C doesn't map well to the full functionality of the von Neumann architecture.


> C accurately models how the computer actually works

i don't even know on what kind of hardware my java or python programs run. neither google (appengine) nor our IT department tell me.

so for most app developers "a computer" is not really something they work with. of course someone must write those abstractions (python, etc), and they do it in C/C++ :)


But I can safely say that it is a machine that the C language models fairly precisely and that python, lisp etc will use in ways that will make it harder to predict how their constructs will interact with the machine.

That's exactly the point: if your machine is anywhere near 'standard' (as in not a SIMD or something exotic), then C is as close as you can get to it without going to assembler.


But I can safely say that it is a machine that the C language models fairly precisely

Not any longer. I don't know how C models the kind of massive parallelism that is becoming mainstream right now.


This is only true to a degree.

The only reason you know what the C you write will end up doing on the machine is because you are generally not doing much when you're working in C.

If you want to get productive work done, you'll be leveraging lots of libraries written by third parties, and you have no more insight into how they are working when you're invoking them with C than Ruby, Python, Java, C#, or any other language.


I think you're missing the point here.

A C program that accesses memory will do so in an extremely predictable way: if I lay out a memory structure, a piece of code and an access pattern, then I can be very sure about how that will interact with things like the caches.

In other languages where there is more 'under the hood' that is a lot harder.

Libraries don't enter in to the problem, that's another level altogether.

While it's true that library code written by third parties might be less (or more) efficient than the code you wrote, even those libraries will carry the baggage of the underlying implementation of the language.

In C there is no implementation underneath; there is nearly a 1:1 correspondence with machine instructions.

That is the reason that kernel code tends to be written in C.
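
To make the predictability claim concrete, a small sketch (not from the thread; the function is purely illustrative). The access pattern is visible right in the source:

    #include <stddef.h>

    /* Sum a rows-by-cols matrix stored contiguously. As written it touches
       consecutive addresses, using each cache line fully before moving on.
       Swap the two loops and it strides by 'cols' doubles per step,
       thrashing the cache -- and you can tell just by reading the indexing. */
    double sum_matrix(const double *m, size_t rows, size_t cols) {
        double sum = 0.0;
        for (size_t r = 0; r < rows; ++r)
            for (size_t c = 0; c < cols; ++c)
                sum += m[r * cols + c];
        return sum;
    }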


I think my point is that you can lose track of the woods for all the trees; moreover, C is not actually a good map of the underlying hardware.

In C, you deal with so many trees, it sure feels like you're right there in the woods. But if you're dealing with things on a tree-by-tree basis, you're not getting much done. And you can pretty easily lose your way by focusing on navigating the territory efficiently at that level while not keeping track of the overall goal.

As to whether kernel code should be written in C, that is a different matter. C is not a good map to von Neumann architectures because it doesn't model shared code and data in memory - C doesn't have a good representation for writing code at runtime. Most code written in C will not be the most efficient possible code for this reason.


Don't you just love drive-by readers who downvote all your comments in a thread, and never leave a reply indicating why they disagree with you? Because I sure do...


I characterize C as a portable assembly language.


Sure, portable assembly language. With, you know, named variables, type abstraction, recursive function decomposition, infix expression grammar, hierarchical flow control constructs... just a bunch of silly tricks. :)

Modern developers have gotten so used to the fact that C is "low level" that they tend to forget how low really low level coding is. There's a ton of "high level" language features in every 3GL, and that includes C. The abstractive distance from machine code to a language like C or Pascal is far, far higher than it is between C and Python.


One of the odd things about C, historically speaking, is actually how impoverished it was compared to macro assemblers written at around the same time. Those didn't have much of a type system either, but you could get all kinds of power with them. No modern assembler is really comparable, much to my dismay.


The comment was probably referring to things like "copying stuff is expensive" or "function calls are expensive", not more specific details of the machine you run on. In C++ you use pointers/references to solve the first problem, and define header-only classes for the latter. Even if you weren't aware, basic use of the standard library will clue you in.
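
A minimal sketch of those two idioms (the function and its names are ours, purely illustrative):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Pass by const reference: no copy of the vector is made at the call.
    // Defining the function in a header makes it implicitly inline, so the
    // compiler can eliminate the call overhead as well.
    inline std::size_t total_length(const std::vector<std::string>& words) {
        std::size_t n = 0;
        for (std::vector<std::string>::const_iterator it = words.begin();
             it != words.end(); ++it)
            n += it->size();
        return n;
    }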


I totally agree. C++'s main strength lies in its "portable assembler" nature. However, C++ is also [quite a behemoth][1], while Lisp, Python and Haskell aren't.

[1]: http://yosefk.com/c++fqa/

The answer then is obvious: we should change the hardware, so it is not C/C++ optimized but Lisp/Python/Haskell optimized. Then these languages become easier and more practical.


Lisp, Python and Haskell should never ever be separated by a mere slash; they're totally different beasts.


Sorry, of course they are. I misphrased my sentence. I should have said "Lisp optimized, or Python optimized, or Haskell optimized", or even "lovely language(s) optimized".

But I didn't start this: "Lovely languages like lisp, python, haskell" (sic).


"we should change the hardware, so it is not C/C++ optimized, but Lisp/Python/Haskell optimized."

http://en.wikipedia.org/wiki/Lisp_machine


In general, hardware optimized for high-level languages has been a failure; usually the extra baggage of what amounts to a machine-level interpreter slows things down nearly as much as a software interpreter. Nothing has been a real commercial success, and it isn't for want of trying. Off the top of my head I can think of the Burroughs B5500, which had hardware support for typed data, the Lisp Machine, and the Intel 432. Sun had a Java machine specification, but I don't think anyone ever bit.


There's Jazelle for ARM; it's essentially an instruction overlay that makes the CPU execute all simple and common Java bytecodes directly, and call a kernel function for all the hard ones.

However, even that is dying -- it is being replaced by ThumbEE, which is a more generic instruction set designed for efficient code generation from intermediate dynamic code.


There is Azul Systems, that produces massive servers for Java applications.

http://en.wikipedia.org/wiki/Azul_Systems

I don't know how successful they are economically though.


There is forth on hardware, and a long tradition of that. It's still alive and kicking.


IBM has plug-in hardware JVMs for its mainframe big iron. They are ungodly expensive.


According to Wikipedia, "zAAPs are not specifically optimized to run Java faster or better". Apparently they're just normal IBM mainframe processors crippled to only run Java and XML.


I can't find anything about that; it's quite interesting. Do you have a link?

edit: found it:

http://news.zdnet.com/2100-9584_22-135319.html


No, C is a portable assembler. C++ was created to add "high level" OO features to C. But the success of the effort is highly controversial, because there are differences between C and the OO model that are too difficult to overcome.

I think the strategy of Objective-C is much cleaner, since it better separates the concerns of "writing fast code" and "writing high level OO abstractions".


How, on a site like HN that champions pragmatic entrepreneurialism, can you possibly claim that the success of C++, a language that has been deployed probably more than any other except C, is controversial? It's a matter of fact. C++ won the early OO race. Maybe Java and C# and whatever have since eclipsed it, but it was still a smashing success by all objective measures, and still is by some of those measures. Yeah it's ugly, but end users don't care.


He wasn't talking about the success (popularity) of C++ at all. He was talking about the success of the effort of adding "high level" OO features to C.

I think C++ totally failed at being high level. So, this "success" being highly controversial doesn't surprise me.


Two quotes from http://www.jwz.org/doc/java.html

C (the PDP-11 assembler that thinks it's a language)

C++ (the PDP-11 assembler that thinks it's an object system)

C isn't really a portable assembler. It was designed to be sufficiently efficient on a specific processor (the PDP-11, a register-based CPU), while providing basic abstractions (functions, structures…).

C was an overwhelming success. And so were CPUs fast at running compiled C programs. As far as hardware optimization is concerned, the main characteristics of C are pointer arithmetic, manual memory management, and a relatively low ratio of function calls (in C programs). These are pretty big constraints. So, we ended up with C-optimized CPUs. Now, what if Unix had been implemented in Forth? All mainstream processors would have been stack based, and optimized for a very high ratio of function calls.

C++, by extending C, also has this "portable assembler" nature. It also has a number of incredibly low-level mechanisms which can be used to build pretty high-level abstractions, but it's still an extension of C.


C is a portable assembler. Sometimes that's exactly what you want.

C++ is C, except that it doesn't completely suck for higher-level projects.


LLVM IR [1] is portable assembler. C is "portable" in the sense that:

a.) you get direct access to your linear address space

b.) gcc targets so many instruction sets that you don't have to worry about your backend unless you have a custom platform.

C++ is the thing that screams "__gxx_personality_v0" at me because I always start off using gcc instead of g++.

[1]: http://llvm.org/docs/LangRef.html


Calling C a portable assembler is not to be taken literally, but in the context of its usage, which was for the most part systems-level stuff that previously would have been written in assembler, requiring expensive ports between platforms.

C changed that dramatically and helped to cut down on porting effort tremendously.


I would say C++ is somewhat less of a portable assembler than C, due to its different linker semantics, which are not unified across platforms.


Siebel mentions this point, off-handedly, in covering Guy Steele's take, and yeah, I agree.


This is a disappointingly flamey article: roughly 90% of the comments are from people who got a bad impression of C++ during the pre-standardization days (which were admittedly horrible, but long since past). Most of the rest are from people who just skipped straight to Java, or who are so young that they never had to learn C++ at all.

The comments that really drop my jaw are the people who seem to hold up Java as a "better" C++. Java is okay, but from its sketchy real-world performance, to its crappy generic programming support and poor ability to enforce such type-safety basics as const-correctness, I've never been impressed. Take the time to learn C++ for what it is, instead of "C with objects" and you'll find that it's far more elegant than the backlash would suggest.

I think that what most people really hate about C++ stems from the widespread use of legacy compilers that incorrectly implement the spec, or implement older versions of the spec. This problem exists for every programming language in the universe (e.g. Ruby 1.8.6 vs. Ruby 1.8.7 vs. Ruby 1.9), but since C++ is widely implemented by multiple vendors, the problem appears worse. If Python or Java were implemented by Microsoft and GNU and four other smaller companies, you'd see the same horrible compatibility problems with those languages, too.


The one point that I see throughout the article is that every programming shop uses its own subset of C++. Based on coding conventions, library usage, and business domain, I have seen how the C++ code used in different companies can look like a different programming language altogether.

My professional experience outside C++ is limited, but I had always assumed this was the same for other languages as well. Is C++ really that much of a different animal from other languages?


In my experience with Java, the core language is small enough that every company pretty much uses the whole thing. Almost everyone has finally upgraded to using at least the Java 5.0 syntax by now. Occasionally you'll run across silly rules that prohibit things like the ternary operator or multiple returns, but those are minor differences.

The big differences come into play with frameworks. Depending on whether you're using Struts/JSF/Hibernate/Spring/etc your code may look completely different.


Occasionally you'll run across silly rules that prohibit things like the ternary operator or multiple returns

In avionics software, the ternary operator and multiple returns are often avoided to help ensure better code coverage during verification.


I don't think anyone is writing avionics software in Java? The automated code coverage tools I have seen handle those cases just fine.


They're only doing line-based coverage analysis?


"If Python or Java were implemented by Microsoft and GNU and four other smaller companies, you'd see the same horrible compatibility problems with those languages, too."

There are a lot of Java implementations, and I am not aware of serious compatibility problems between them.


The tiny differences in implementation between the Sun and IBM VMs are a regular source of deep and painful frustration for some of the people I work with.


roughly 90% of the comments are from people who got a bad impression of C++ during the pre-standardization days

Zawinski and Thompson are good, but they are not 90% of Zawinski, Thompson, Bloch, Eich, Ingalls, Armstrong and Steele. You're completely misrepresenting the gist of the article.

The comments that really drop my jaw are the people who seem to hold up Java as a "better" C++.

http://www.paulgraham.com/icad.html

but from its sketchy real-world performance

Oh please, don't start that flamewar with these kinds of uninformed, one sided utterances. Java performance has been more than fine for at least five years.


i'm not so sure about that.

a few months ago i was working on a project that involves amazon ec2 and s3. amazon gives you a bunch of command-line utilities, written in java, to perform various tasks. every one of them takes a second or two to start up, which really annoyed the heck out of me. that certainly wouldn't have happened if they were written in c++.


Startup time is, IMHO, not an essential part of 'performance', as it is usually negligible compared to the time the actual execution (often on the order of weeks) takes. But granted, for the case of small apps performing short-lived tasks, JVM startup adds a relatively large amount of overhead, which can be annoying.


I would have to agree. I have no doubt that anyone in the early days of C++ probably had a real bad experience and was right to tell people not to use it at that time.

I personally still use C++ because of the libraries and tools. With Boost, Qt, WebKit, valgrind, gdb, and others I am simply too productive in C++.

If someone told me I could only use the c++ standard library and no other libraries I could probably switch languages pretty quick. I can only imagine what it must have been like before when you got nothing.


The problem with C++ is not just bad implementations. Nowadays we have excellent, standard-compliant compilers for C++, but there are still rough edges. For example, the amount of garbage you get from template errors. You can "learn" to live with that, but it is something broken in the language, and it won't be solved quickly. The amount of care you need to write correct exception-based programs is another. And the failure of exception specifications on method signatures shows how we should really avoid some of C++'s features.
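
For anyone who hasn't run into it: the exception-specification feature referred to above is checked at run time rather than compile time, which is the heart of its failure. A minimal sketch:

    #include <stdexcept>

    // C++98 dynamic exception specification. The compiler accepts this
    // without complaint, but if f() throws anything other than
    // std::runtime_error, std::unexpected() is called at run time and the
    // program typically aborts -- exactly the wrong place to find out.
    void f(int i) throw(std::runtime_error) {
        if (i < 0)
            throw std::logic_error("negative");  // compiles fine; dies at run time
    }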


"Yet C++ is also frequently reviled both by those who never use and by those who use it all the time."

I was once listening to RMS speak at a conference. One guy got up and said that this was one of the n great talks he had been to. He mentioned Stroustrup as another speaker who had impressed him, upon which RMS said "You need to get your head checked".

Sometimes a great hacker community can help overcome the shortcomings of a language. Trevor Blackwell makes that point here: http://www.tlb.org/faq.html.

"Besides their intrinsic characteristics, languages define commmunities of programmers. You want to choose one that lets you communicate with good programmers, because you'll learn from them. They tend to prefer powerful languages like Python, Lisp, and C++. So for example, although Visual Basic is actually a powerful and complete language, few good programmers use it. C++, on the other hand, is a rather poorly designed language, but for historical reasons a lot of smart people use it so at least you'll be in good company."


> Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair.

> And he said “no” to no one.

And that is the core of the problem.


And he said “no” to no one.

I don't really see this. Most of C++'s complexity emerges naturally from three things:

1. Lack of GC.

2. Direct support for user-defined types on the stack.

3. The desire to implement every feature as efficiently as the corresponding C idiom.

There actually aren't very many controversial features in C++: multiple inheritance, operator overloading, templates, and exception handling are the ones that come to mind. With the exception of multiple inheritance, none of these is particularly heinous. Implementing them under the three restrictions above is where things get complicated.


Operator overloading all by itself is a source of plenty of headaches, in combination with multiple inheritance you can spend a good bit of time trying to find out what something should do before you can begin to figure out what it actually does.

In most languages my preferred view in the debugger is the source code of the language, in C++ I almost always just looked at the assembly. It seemed the more 'clear' language :)

Lack of GC never bothered me much, by the way; that's no different in C, and that's where I really learned how to code, so keeping an eye out for memory leaks and their opposite, double frees, is somehow a built-in feature (of the programmer, not the language).


Is operator overloading in C++ really more complex than it should be, or did you just pay the inevitable price for dabbling in the black art of multiple inheritance? Anytime you say "in combination with multiple inheritance" you can't expect to get much sympathy ;-)

C++ is definitely a language where you can get screwed by "clever" programmers who would rather be reading TC++PL than actually coding, but it's not so bad if the people you work with exercise good judgment. I've never had to deal with multiple inheritance in real production code, for instance.


There was a time when I made most of my income debugging other peoples programs, call it 'troubleshooter' or something like that.

It gives you an excellent overview of the various ways in which things can go wrong, and C++ figured quite prominently in the 'gotcha' department.

C has its share of issues: double frees, failure to initialize (but most compilers catch that one nowadays), and stale pointers. With good discipline you can work around those.

C++ can obscure the bugs in such a way that it takes you a long long time before you can 'nail' them. The biggest problem I have with the language is that the code tends to obscure what is going on at the machine level.

I guess that's a 'feature' too, but I prefer to have a more direct correspondence between program code and what goes on below the surface. That's a problem with OO in general by the way, and I think that this is part of what makes software so terribly inefficient these days.

Gigabytes of RAM are barely enough to accommodate a single-user OS; it's really pretty weird.


Point taken; I have never had to deal with poorly written code that makes casual use of obscure C++ features. My shop tends to be very careful with the sharp edges, so you don't have to have guru knowledge to skim our code and see what's happening. All potential "gotchas" are commented, just like any other difficult code. Obviously the standards you can realistically demand from your coworkers go out the window when you're dealing with legacy code.


"The biggest problem I have with the language is that the code tends to obscure what is going on at the machine level."

Which totally defeats the purpose of being backward compatible with C.


I think that basing it off C was a quick way to get widespread acceptance though. It succeeded in that respect.


Yep. If you had a C program you were automatically already using C++. You could just suddenly start to use the extra features where you wanted them. For that it was great.

If it had been D instead (an actual new version of C designed to incorporate new features) rather than C++ (additional features on top of C), it would probably be less reviled, but there would have been a lot more resistance to adoption.


Please use a precise term when you talk about multiple inheritance, even though in this case it is clear that C++-style MI is meant.

In general, there is at least the linearizing (Common Lisp) multiple inheritance with well-defined method lookup semantics (even though there are good ways to shoot oneself in the foot there), and C++'s local, hackity multiple inheritance (where the programmer has to resolve lookup ambiguities, resulting in unpredictable semantics). And this even avoids the discussion of traits, mixins and such.


Different problem, different constraints, different solution. And, inevitably, just different ways to shoot yourself in the foot. Please explain how Common Lisp multiple inheritance relates to the question at hand, which is whether the complexity of C++ came from undisciplined language design or was inevitable given the design constraints of the language.


The distinction matters, because multiple inheritance in Common Lisp (and other linearizing languages) is a lot saner and easier to use than multiple inheritance where the programmer has to define the method lookup sequence, which in turn implies that multiple inheritance in itself is not the sole cause of complexity in this context, but rather the implementation of multiple inheritance in the given language (C++).


CLOS method dispatch does not meet the design criteria of C++. It uses runtime information about inheritance relationships, for example.

Also, note that there is also a semantic difference here between C++ (and Java) and most other languages. In C++ multiple inheritance, if classes A and B define method foo, and A and B have no inheritance relationship, then A::foo and B::foo are entirely unrelated methods -- just as if they were named A::foo and B::bar. The fact that they have the same name does not automatically create a relationship between them, as it would in CLOS or in duck-typed languages like Python. That means that a call to foo in a class that inherits from A and B is just as ambiguous as if the programmer had typed "foo, or maybe bar." This unrelatedness is inherent to C++'s type system. It would make no sense to a C++ programmer to treat A::foo and B::foo as being alternative dispatch targets for the same method call.

Even if you allow the compiler to gloss over this unrelatedness, I still think it is a matter of taste whether it is simpler to require programmer disambiguation or to have a well-defined algorithm for resolving ambiguous names. C++ usually takes heat for providing powerful more-than-meets-the-eye mechanisms that allow programmers to "hide" program semantics inside language features. Here C++ takes the opposite approach and requires the programmer to explicitly resolve ambiguous names, and it takes heat from a CLOS programmer, who could write an entire operating system using a single family of generic functions all having the same name ;-) Damned if you do, damned if you don't.
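
A minimal sketch of that unrelatedness (class names are ours, purely illustrative):

    #include <iostream>

    struct A { void foo() { std::cout << "A::foo\n"; } };
    struct B { void foo() { std::cout << "B::foo\n"; } };  // unrelated to A::foo
    struct C : A, B {};

    int main() {
        C c;
        // c.foo();    // error: request for member 'foo' is ambiguous
        c.A::foo();    // the programmer must disambiguate explicitly
        c.B::foo();
    }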


> who could write an entire operating system using a single family of generic functions all having the same name ;-)

I sense an upcoming challenge! ;)


Operator overloading in C++ is indeed more complex than it should be, thanks in part to the three different ways to pass parameters (references, pointers, and values). It's definitely more straightforward in languages like C# or Python (or, well, just about anything else) that don't have these explicit distinctions.


The distinction between pointers and values (and the ability to treat pointers as values) is necessary for many applications of C++, so that complexity _should_ indeed be in the language. You may be right that the distinction between pointers and references is gratuitous. Ordinarily, one defines an overloaded operator using reference arguments (never pointers; only values if you're doing performance microoptimizations, i.e., only for a handful of carefully inspected types) and dereferences pointers before applying the operator, so it doesn't really matter unless code in some other module is overloading operators for your types. That's a bizarre circumstance that never happens in practice.


"that complexity _should_ indeed be in the language"

Oh, absolutely. I just meant that it adds some incidental complexity to operator overloading that doesn't exist in other languages.


Actually, the book "Design and Evolution of C++" shows Stroustrup saying "no" to quite a few features (e.g. named parameters). It would probably be more proper to say that Stroustrup didn't say "no" to enough people.


I see this claim made frequently, but Stroustrup talks about several features in Design & Evolution that were rejected. The committee requires implementations before accepting new features; they don't say "We like that idea" and include it without evidence that it's both possible and useful.


I think the main thrust of that argument has to do with trying to please too many people that were willing to put their support behind C++ if the language would support their pet construct.


It was more the other way around, I think. Implementers had a lot of power to say, "No, this can't be done efficiently," and as a result, the burden was passed to programmers instead.

Can you give an example of a "pet construct" (aside from major features like multiple inheritance and exception handling) that could have been left out?


I think operator overloading, which seemed like a 'great idea at the time', should, with some forethought, never have been part of the language in the way it works right now.

The times that I ran in to examples where it was used properly and elegantly I think it could have been done just as nice with a properly named function call.

I think the way it works came from the DSL camp arguing that you should be able to make it look as though certain bits were native to the language, even if they weren't, and then the only uses we get are things that have nothing to do with DSLs but everything with children handling power tools (as in: bad idea).

Show me a single example of where operator overloading was a necessity to make a function work, and I'll show you a more cleanly coded version using a function call.

The interesting thing is that after the compiler is done with it you'll be calling ::operatorX anyway.

Plenty of times there are naming issues as well: what does it mean to add two items of 'X' together? Most times such a statement would be meaningless in the real world, but if you were forced to use a named function instead of an operator, that function would have a name that matched what really happens.

Naming stuff is one of the great powers of computer languages, use of operators should be reserved to those situations where the outcome is predictable, and where operator precedence rules are the same as they would be when adding numbers.

So, it's fine to add an instance of a class called 'area' to another instance of that class, but you can't add one instance of 'boat' to another. That one should be done using:

    cargospace = add_cargospace(boat1,boat2);
or something to that effect.


use of operators should be reserved to those situations where the outcome is predictable, and where operator precedence rules are the same as they would be when adding numbers.

So what's stopping you? That's exactly the way I do it. In my C++ career I've only overloaded math operators for math types, * and -> for smart pointers, [] for container-like types, and () for callable objects. At least in production code... I did some crazy and irresponsible things in college, just like everyone else.

If there's one point I've been wanting to make over and over again in this discussion, it's that misuse of language features is a much smaller problem than people think. (In practice, it is even negligible next to the overhead of reading poor error messages from the STL.) C++ is a very bad programming language for people who fear their coworkers, but so are C, Ruby, and any Lisp with macros. Even Java isn't protection against stupidity: a bad Java programmer on your team can't confuse you, but he can bury you.

Just think of defmacro as implicitly adding an infinite number of infinitely complex language features to Lisp. If you can deal with defmacro in a responsible way, operator overloading shouldn't bother you at all.
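
For a concrete picture of the disciplined kind of overloading described above, here is roughly what the smart-pointer case looks like (a simplified sketch in the spirit of boost's scoped_ptr, not its actual source):

    // Overloading * and -> so the wrapper reads exactly like the raw
    // pointer it guards; the destructor provides scope-bound cleanup.
    template <class T>
    class scoped_ptr {
        T* p_;
        scoped_ptr(const scoped_ptr&);             // non-copyable
        scoped_ptr& operator=(const scoped_ptr&);
    public:
        explicit scoped_ptr(T* p = 0) : p_(p) {}
        ~scoped_ptr() { delete p_; }
        T& operator*() const { return *p_; }
        T* operator->() const { return p_; }
        T* get() const { return p_; }
    };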


I see it as a traffic problem. I'm not that scared when driving of what I'll do next. But some of the other drivers on that same road scare the willies out of me.

Same with language features. If all you have is bicycles then nobody is going to hurt anybody badly. But if we're all driving rocket cars at 1000 MPh then the resulting pile-ups will be spectacular.

You use operator overloading in a disciplined fashion, because you understand fully what the consequences are when you are using them improperly. If a situation is a 'natural' for overloading then you'll use it, otherwise you'll use a function with a descriptive name.

Unfortunately that is not the norm. You'd almost want an '--expert' flag given to the compiler before you are allowed to use those features, and every time you want to use it you get this trick question about some obscure bug that you have to answer :)

The best little C bug I saw in some forum somewhere by the way was this one:

            int x,y;
    
            x = 0; /* initialize both x
            y = 0;    and y to 0 */
   
            domorestuff(x,y);
Of course any C programmer worth his salt will spot that one in a heartbeat, but it is indicative of how easily a language 'feature' (multiline comments) can become a bug.


That's also an example in favor of why syntax highlighting is a good thing.


Nowadays modern C/C++ compilers would complain about that right away (uninitialized value). Even if you have warnings turned off, certain compilers would catch this at runtime.

And as the below(above?) poster said - there is syntax highlighting. It's not the greatest example of failure.


>done just as nice with a properly named function call.

So instead of view = projection * model; you would prefer:

view.SetVector(Multiply3x3matrixWith3x1Vector(projection.GetMatrix(), model.GetVector()));

Should we allow floats and doubles to be multiplied with a '*', or should that use properly named functions?


Actually, you sort of prove my point there.

How does someone looking at the code know which parts of projection and model get multiplied here?

At least make it:

     view.setVector(projection.getMatrix() * model.getVector());
Because '*' applied to a model or a projection could literally mean anything; there might be many elements in those structures that are candidates for multiplication.

Without making it explicit what gets multiplied, the code is less clear than it could be.

If they're simple arrays without further fields attached to them then you could do:

    view = matvecmul(projection,model);
And that's pretty close to the overloaded example.


The point is that there are fixed rules in maths for what happens when you multiply a 3x3 matrix by a 3x1 matrix, and whether this is even possible. Those should be enforced by the person writing the library's overloaded operators, not left to the programmer to know which is the appropriate function to call.

Multiplying matrices is no more weird than multiplying a float by an int or a positive and negative number.
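
A sketch of that argument in code (types are ours, not from any particular library): the overload encodes the one mathematically valid rule, so a dimension mismatch simply doesn't compile.

    struct Vec3 { double v[3]; };
    struct Mat3 { double m[3][3]; };

    // The fixed mathematical rule, written once by the library author.
    Vec3 operator*(const Mat3& a, const Vec3& x) {
        Vec3 r;
        for (int i = 0; i < 3; ++i) {
            r.v[i] = 0.0;
            for (int j = 0; j < 3; ++j)
                r.v[i] += a.m[i][j] * x.v[j];
        }
        return r;
    }
    // Vec3 * Mat3 is left undefined, so the invalid order won't compile.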


http://spirit.sourceforge.net/

It's a DSL, similar to EBNF, that generates parsers. It would be considerably more difficult to use without operator overloading.
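
Roughly the canonical example from the classic Spirit documentation, parsing a comma-separated list of reals (header paths changed across Boost versions, so treat this as a sketch):

    #include <boost/spirit/core.hpp>  // classic Spirit, circa 2009
    using namespace boost::spirit;

    // Overloaded >> (sequence) and * (Kleene star) make the grammar read
    // almost like the EBNF it mimics: real (',' real)*
    bool parse_csv_reals(const char* str) {
        return parse(str, real_p >> *(',' >> real_p), space_p).full;
    }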


Agreed, DSLs are the one area that I can think of where overloading is a 'natural'.

Unfortunately its use is not quite limited to code like that.


Well, he did say no to concepts just a few months ago, and that was one of the more useful features proposed for C++0x. Though one has to admit it was an over-engineered, ugly monstrosity (surprise), and it was quite far from the simple and elegant idea that was long lobbied for by Alex Stepanov & Co.


There are parts of all languages that I find confusing, but I'm a little puzzled by the general criticism of C++ (e.g., the __getitem__ feature of Python seems to be similar in spirit to operator overloading. Do people criticize Python too?).

IMHO, C++ is an amazing extension to C which managed to increase productivity for those tasks for which C would have been the right choice (modulo non-standard compilers).

If I were writing a performance-critical system-level program today, using C++ would be a no-brainer. You can fake a lot of what C++ offers with macros and function pointers in C, but you lose type safety and/or performance. I am very thankful to all the people who have brought C++ to the state it is in today.


It's really fascinating to me. We had Lisp, and then the road for many led to C/C++, Java, and related languages. Then we needed a way to express data in more human-readable form, so some genius came up with XML. XML was too cumbersome (close tags, how attributes are expressed, etc.) and then JSON came into the picture. So all that just to come back to where we were in the first place.

Now I clearly understand what some of the veterans in "Coders at Work" really meant by "Very little progress, if any, has been made in the Programming space".


JSON is not a programming language.


You are absolutely right. My point exactly.

Now you need "two hammers" to get a single job done. Your current programming language of choice and a data expression language.

For example: In C, C++, Java, or related, you "first" have to build your structure to represent a Person with first_name and last_name and then you have to write it to JSON:

Java:

  class Person { String firstName; String lastName; }
JSON with JavaScript evaluation:

  var json = '[{"firstName": "John", "lastName": "Smith"}, {"firstName": "Bob", "lastName": "Jones"}]';

  eval("(" + names + ")");
Lisp handles both naturally. You don't need an intermediary human-readable data expression language; it comes naturally to the language itself.

Lisp ver:

  (defvar names '((:firstName "John" :lastName "Smith") (:firstName "Bob" :lastName "Jones")))

  ;; and turning the printed text back into structure is just the reader:
  (read-from-string "((:firstName \"John\" :lastName \"Smith\"))")


> For example: In C, C++, Java, or related, you "first" have to build your structure to represent a Person with first_name and last_name

Not if you're using Protocol Buffers, which is essentially a version of JSON that has "batteries included."

> Lisp ver:

Lisp is not the only programming language around. How am I going to consume that data from another language? Embed an entire Lisp interpreter?


Not all problems require communicating your data structure outside of the program. And I don't think it's accurate to say "Now you need." For problems where you do need to communicate data outside of the program, that's been true since the days of Fortran.


Which just goes to show we still have not made all the way back to where Lisp was a few decades ago.


Consider that JSON has features Lisp does not. Like a tiny implementation, very wide language interoperability, being purely a data language (sandboxed by design), and not being Turing-complete (time to interpret is automatically bounded).


> (time to interpret is automatically bounded).

Well, at least for that particular set of cases I think you've just solved the halting problem ;)


C++ is pretty awful to develop in, especially compared to many other languages. Some of these issues have to do with poorly designed features in the core language (templates), static variable instantiation, no good way of declaring complex data structures in the language (à la JSON), multiple inheritance, etc.

But this isn't just a problem with the core language itself, but with the compilation model and linkage model - and the compilers and linkers. You need include guards on the top of your header files to make sure the code doesn't get re-included. You have to aggressively forward declare to work around declaration dependencies. And you have to trim the files that you include to keep compile times under control, because here's the fun thing about C++: For every .cpp file in your project, you will end up reading in and parsing the same damn header files again and again. In a large project, that might mean recompiling the same header files 1000s of times. Precompiled headers aren't a good solution, either - because when you have to touch the header, you pay a huge cost in recompiling the PCH.
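
The include-guard dance in question, for anyone who hasn't had the pleasure (file and macro names are ours):

    // point.h -- without the guard, a second #include of this file in the
    // same translation unit would redefine Point and break the build.
    #ifndef POINT_H
    #define POINT_H

    struct Point { double x, y; };

    #endif  // POINT_H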

In every other language I've used, I haven't had to worry about carefully restructuring code (sometimes at a cost in performance) to reduce compile times - because the core include and dependency mechanism sucks so badly.

Finally, C++ as a language is very poorly extensible. By this I mean compared to languages like Lisp (with macros), or Ruby / Scala, which both make writing DSLs very easy. This is especially unfortunate, given that it is so heavily used in a lot of very specific domains. And don't get me started on templates -- I couldn't dream up a more backwards and awkward way of doing generics if I tried.


I noticed this as well when I read the book: http://www.cloudknow.com/2009/08/review-coders-at-work-by-pe...


I think it's a very good article detailing the various flaws of C++; I particularly have problems with the complexity of the language making it very difficult for compiler writers to generate usable error messages (not only true for gcc as mentioned in the article, but Microsoft's C++ compiler generates many unusable error messages, too! [At least it did in 2004, never touched it again...])


the point about everyone making their own sub-language is exactly right. In fact, I've heard programmers say that's exactly what they love about it.

Problem is, every shop is inventing their own subculture. When programmers who haven't worked together look at each other's C++, they're basically learning a new language.

I do think this has changed a bit in the last few years. The whole 'design pattern' meme exists primarily to create nexuses of style that programmers can gravitate to. It just seems like such a slow way home, especially in comparison to a language like Python, which goes out of its way to restrict the number of ways you can skin the cat.

Has any serious programmer ever looked at a Python file and said "wtf?" for more than a minute or two? And, has any experienced programmer not had the experience of looking at a new batch of C++ (say at a new job) and thought to him/herself, jeez, this is going to be a lot of work just to understand the basics of what they're doing?


I'm liking these Seibel digest posts more than I'm liking the actual book, which, even in the Norvig chapter, drags a bit.

I think the book might have worked better if it was organized topically, instead of by person.


Of course it's going to be more convoluted than newer languages like Java: Java doesn't have to maintain backwards compatibility with a 35+ year old programming language. That's just stating the obvious.

Yet for some reason people still use it. The question is 'why'. For many cases (not all though), I think the reason is largely just institutional inertia - you stick with the devil you know rather than the devil you don't know.


Have there been experiments with creating a pre-compiler that checked that you were using a subset of C++? What downsides would this have (other than longer compile time)?


But which subset ?

When I program in C++ (and sometimes you have to) there are features that I'll avoid like the plague, once bitten, twice shy.

In fact, my subset of C++ was usually: stay as close as you can to C and use C++ only when you have to. That seemed to be a pretty safe route.

Most of my C++ stuff was using Borland C++ Builder or Microsoft visual C++, I'm happy to say I no longer have to support software for the windows platform, so no more C++ for me.


I try to stay as close to C as possible. But C++ has many temptations. If you start using boost, for example, you will be in no time using features of C++ that are not even in the standard yet :-))


I just use gcc to compile the code. That way I know what subset I am using and the rest of the code is using.


The google style guide posted elsewhere in this thread links to this:

http://google-styleguide.googlecode.com/svn/trunk/cpplint/cp...


It's the only language I really feel comfortable programming in, and everything else seems 1) slow or 2) alien in comparison.

I'm digging scala's design, but have yet to really compare it for number crunching, and big library building.

By the way, the article didn't really dig into any specifics besides saying there was feature bloat in C++. Which languages don't have feature bloat? How much use can I get out of them?


How can people complain about C++ when so many daily useful tools were written in it? Even their beloved JVM is a mere C++ program. Consider the very successful and really cross-platform WebKit, V8, Mozilla's Gecko and so on. And there are thousands of successful proprietary projects like Informix, and even the popular MySQL.

If Google, Mozilla and many others can productively use C++, why can't other people? They have even published guidelines on how to use stable and portable subsets of the language.


Isn't the C++ compiler written in C?


C++ sucks, why do all those idiots at Google use it? /sarcasm


Argh. Not a very useful article. Start with an observation: "Everyone hates C++, and yet it is really widely used. That's odd." Next, 10 paragraphs of people hating on C++. End with: actually wait, no ending.


Well, the 'everyone' he is talking about are some of the most respected people in the computer world, that should count for something.

Also, they make fairly specific criticisms, and they have a track record of being right about such things.

Java seems to be the C++ replacement of the future, with C# pulling the other way.


"with C# pulling the other way"

Care to elaborate (I'm legitimately unsure what you mean, not being snarky)?


I think that C# is an attempt at creating an alternative for enterprise level programming to Java, basically .net and C# are an answer to the whole Java ecosystem.

They're competing for the same space, so a business that wants to implement some functionality from scratch basically has these two solutions to choose from.

In the 'web' domain there are many many more choices, but for places like banks and such that were first dominated by COBOL, then reluctantly moved to C or C++, for the most part they are either looking at Java or C#.


A-ha, yeah. I read that originally as Java and C# pulling in opposite directions in a fundamental way, not in a mind-/market-share sense.


The point I was trying to make wasn't about C++, it was that the article starts with a question it never answers.

It would be a more honest opening to just say "lots of smart people hate C++", and then proceeded with the examples.

Instead, it pretends like it is going to answer the question of why it is so popular in spite of being widely reviled, but then never touches on that.

edit: grammar


Well, the 'everyone' he is talking about are some of the most respected people in the computer world, that should count for something.

And while they're busy bitching and hating and pontificating, humble unknowns are getting things done and making the world a better place. C++ is to systems programming what PHP is to web programming: the loyal packhorse that the A-list wouldn't be seen dead on.


That is very true.

PHP, Python and Perl are rarely seen at corporations that existed before the web came around. Their whole IT department is set up around a different kind of environment.

I think this is in part why the web is so disruptive, because it enables all these upstarts using very light and nimble stuff to challenge the big and established companies directly.

You can pretty much tell what is enterprise stuff and what is 'quick & dirty but does the job' by comparing hourly rates for programmers.

It also has everything to do with the length of time before someone can be productive in a new environment. If that length of time is very short then there is not a very high barrier, which tends to create a lot of 'wannabe' programmers in that language.

By raising the bar you only end up with people who are willing to invest a large amount of time in a platform; that tends to favour people who get paid for their work, who in turn are found mostly in enterprise settings.


PHP, Python and Perl are rarely seen at corporations that existed before the web came around. Their whole IT department is set up around a different kind of environment.

Not sure if this is true in general. I work at a bank. We use a lot of Perl. Banks existed long before "the web", and our Perl stuff is not a web application either. All the internal web apps are Java.

It is kind of weird, actually -- Java for stuff like the HR apps, and Perl for the stuff that makes us money. (If it were my decision, I would use Perl for the web apps and Haskell for the algorithmic stuff... but it's not.)


> We use a lot of Perl.

That's interesting! That's the last thing I would expect from a bank, actually; I wonder how common that is.

But then again, is there even an 'enterprise scripting language'?

> All the internal web apps are Java.

That's what I would expect. Sun really did some good marketing in that sphere.


>> We use a lot of Perl.

>That's interesting! That's the last thing I would expect from a bank actually, wonder how common that is.

I've seen a lot of Perl used in banking/finance. It really is the "Practical Extraction and Reporting Language", and it seemed to have almost completely replaced shell scripting at the last place I worked.

One interesting anecdote about Perl: a group I worked in originally chose Ada as their language when they were just starting out, but eventually gave up and went with Perl just because of its relative power and simplicity. That company also has a major initiative to replace a lot of what was written in Perl (and runs the company to this day) with a system written in Java. That system is still not doing any serious production work and is at least a couple of years late. I don't think that has as much to do with the Java language as with the "this will be the mother of all systems" approach they took in the beginning, though.


The area of a bank that generates money employs the smartest people, usually coming from academia. So it doesn't surprise me.


Good point.

Your comment and gcheong's just above make me wonder whether someone missed the boat in getting an enterprise-level scripting language out the door. It looks like there was never a 'natural' successor to stuff like JCL.


Sun and MS are trying to bring scripting languages to their managed environments. .NET already supports Python reasonably well (I was surprised to see that SharpDevelop http://www.icsharpcode.net/OpenSource/SD/ can easily convert most of my C# into Python).


I completely agree. While C++ certainly isn't perfect, those of us who write real code that is used and maintained day to day can be very productive.

While I know who most of the people in the article are and respect their contributions to the world of software engineering, the day-to-day, real-world pressures of delivery aren't the same as an example in a book.

A hammer isn't an elegant tool, but it gets the nail into the wood.


If you code in a 'safe' way and you stay away from nifty tricks then you are in good shape. I'm sure you have a little list of stuff that you should stay away from or that you would at least discourage.

The problems usually come when people start to use all those nifty features in combination, especially 'newbie' programmers going wild on all that high-level sugar.

Performance-wise, C/C++ is also about as good as unbeatable, if you know what you're doing, which is another reason it has such staying power.


But think about open source code. Anyone can contribute, so if you're using C++ you will get code from people at all levels of understanding of C++. This is one reason some open source people prefer C: they won't have to spend time spelling out what can or cannot be used in the project.

An example is Google. They had to create a huge document describing which parts of the language their developers can use: http://google-styleguide.googlecode.com/svn/trunk/cppguide.x...


Most of that is 'plain good sense' in any environment; here are a few that target specific C++ pitfalls:

- Only very rarely is multiple implementation inheritance actually useful. We allow multiple inheritance only when at most one of the base classes has an implementation; all other base classes must be pure interface classes tagged with the Interface suffix.

- Do not overload operators except in rare, special circumstances.

- Make all data members private, and provide access to them through accessor functions as needed. Typically a variable would be called foo_ and the accessor function foo(). You may also want a mutator function set_foo(). (A short sketch of this follows the list.)

- All parameters passed by reference must be labeled const.

- Do not use function overloading to simulate default function parameters.

- We do not allow variable-length arrays or alloca().

- Be very cautious with macros. Prefer inline functions, enums, and const variables to macros.
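
As a small sketch of the accessor/mutator rule above (class and member names are mine, not the guide's):

    class Employee {
     public:
      // Convention from the guide: member foo_, accessor foo(),
      // mutator set_foo().
      int salary() const { return salary_; }
      void set_salary(int salary) { salary_ = salary; }
     private:
      int salary_;
    };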

The only thing that struck me when reading this that I do differently is the use of ++p; I would avoid that at all costs.

Pre-increment is a great way to confuse people, though I can see why they would use it. Another similar source of confusion (stemming from C, really) is the difference between

    *p++;
and

    (*p)++;
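
A small sketch (my own example, not from the thread) of the difference: postfix ++ binds tighter than the unary *, so *p++ advances the pointer, while (*p)++ increments the value pointed to.

    #include <cstdio>

    int main() {
        int a[] = { 10, 20 };
        int *p = a;

        int x = *p++;    // x = 10; the POINTER advances to &a[1]
        int y = (*p)++;  // y = 20; the VALUE a[1] becomes 21, p unchanged

        std::printf("x=%d y=%d a[1]=%d\n", x, y, a[1]);  // x=10 y=20 a[1]=21
        return 0;
    }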


When working with objects you should always use ++p instead of p++. The latter creates a temporary, so you're doing extra work for nothing.
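
Roughly why, with a hypothetical Iter class for illustration: the canonical postfix operator++ has to copy the object so it can return the old value, while the prefix form just mutates in place and returns a reference.

    struct Iter {
        int i;
        Iter& operator++() { ++i; return *this; }  // prefix: mutate, no copy
        Iter operator++(int) {                     // postfix: must hand back
            Iter old(*this);                       // the old value, so it
            ++i;                                   // makes a full copy first
            return old;
        }
    };
    // ++it calls the cheap prefix form; it++ constructs and returns 'old'.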


That's an interesting one, I never even thought of that! Thanks.


Hence the old joke that C++ should really have been called ++C because "C++" violates good C++ practice.


Google creates a huge document describing which parts of the language their developers can use, for every language.


True, but the topic here is C++, and what's interesting is not so much what they allow people to use as which features they rule out.

You can bet that behind each and every feature that they decided to rule 'against' was some pretty solid thinking and possibly some very hard-won experience.

There's a guide for:

    C/C++
    R
    Objective-C
    Python

see here:

http://google-styleguide.googlecode.com/svn/trunk/


I do indeed have a list of things to stay away from, but it's for my own sanity. For example, without GC, exceptions are almost always going to cause leaks unless the whole team consists of C++ experts. If I were designing in Java, exceptions would be wonderful; in C++, they're asking for trouble.


exceptions are almost always going to cause leaks unless the whole team consists of C++ experts

Don't give up exceptions; give up naked pointers. Use shared_ptr and scoped_ptr, and you won't have memory leaks due to exceptions. Use RAII consistently, and you'll find it's actually easier to handle resource management in C++ than in Java. Java's resource management is superior for memory but inferior for every other resource.
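
A minimal sketch of that idea (names are mine, and since this thread predates C++0x, boost::shared_ptr stands in for a standard smart pointer): the destructor runs during stack unwinding, so the resource is released even when an exception flies through.

    #include <cstdio>
    #include <stdexcept>
    #include <boost/shared_ptr.hpp>

    struct Buffer {
        char *data;
        Buffer() : data(new char[4096]) {}
        ~Buffer() { delete[] data; }  // runs during stack unwinding too
    };

    void parse(const char *path) {
        boost::shared_ptr<Buffer> buf(new Buffer);  // no naked owning pointer
        if (!path)
            throw std::runtime_error("no input");   // buf is still freed
        // ... use buf->data ...
    }

    int main() {
        try { parse(0); } catch (std::runtime_error &) {}
        return 0;  // nothing leaked despite the exception
    }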


> but it's for my own sanity.

I think that is what such lists have in common :)

> For example, without GC, exceptions are almost always going to cause leaks unless the whole team consists of C++ experts.

That's something I've rarely had trouble with, but then I never worked much in teams. I did notice that double-frees and never-freed pointers make up a good portion of the problems you encounter in live code, though.

And every time Firefox goes AWOL I'm fairly sure that somewhere deep down someone's code just tried to reference a stale pointer or something like that.

Even in Java exceptions are not all that wonderful; I think they are often just a way to say "I don't feel like writing the code required to handle this situation properly, let's throw an exception".

You sometimes end up with more exception-handling code than functional code, and that isn't the right balance either.


Throwing an exception isn't saying "I don't feel like dealing with this", but rather, "a problem has occurred". In C++ (ignoring exceptions), you typically (or hopefully!) return an error code and rely on the caller to handle the issue. In Java, with checked exceptions, a catch or a declaration that the current method throws the same exception is required for compilation.
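
To put that contrast in C++ terms (hypothetical function names, just for illustration):

    #include <cstdio>
    #include <stdexcept>

    // Error-code style: the caller can simply forget to check the result.
    int read_config_ec(const char *path, int *out) {
        if (!path) return -1;  // error code, silently ignorable
        *out = 42;             // pretend we parsed something
        return 0;
    }

    // Exception style: "a problem has occurred" cannot be dropped on the
    // floor; it propagates until something handles it.
    int read_config(const char *path) {
        if (!path) throw std::invalid_argument("null path");
        return 42;
    }

    int main() {
        int v;
        read_config_ec(0, &v);  // the error is lost without a trace
        try { read_config(0); }
        catch (std::invalid_argument &) { std::printf("handled\n"); }
        return 0;
    }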

Now, this doesn't stop empty catch blocks or other horrible coding practices, but professional software engineers should never let that happen in their code (and there are lots of people who do, and they shouldn't be getting paid to write code!)

Checked exceptions wouldn't fix the memory-leak issues in C++ (I routinely use the shared_ptr template for that), but they would be a good step forward.


Aye, I hear you. But the problem is not that people use things the way they are intended; it's that they will use them in ways that were never intended!

> Now, this doesn't stop empty catch blocks or other horrible coding practices,

Exactly...

> professional software engineers should never let that happen in their code

I could show you some horror stuff I'm dealing with right now that does exactly that.

The bigger problem is that it works, and that I'm having a hard time convincing other people that this is not how it's done.



