Common C++ Gotchas (vickychijwani.me)
91 points by vickychijwani on Nov 7, 2014 | hide | past | favorite | 46 comments

#3 should really say make public base class destructors virtual

#6 sometimes you actually will want an A(A&) constructor... for example, if you have constructor overloaded with a greedy template argument.

#8 objects returned by value will also bind to an rvalue reference

#9 Remember in C++11 there's also new T{};

#10 isn't absolutely correct either, dynamic_cast can actually be used in some circumstances where static_cast or implicit casting could have been used.

    struct A {};

    struct B : A {};

    int main() {
        auto b = new B();
        auto a = dynamic_cast<A*>(b); // fine: an upcast, same as static_cast or an implicit conversion
    }

#3 should really say "make polymorphic public base class destructors virtual."

There are reasons why you might want to use public inheritance that have nothing to do with run-time polymorphism.

I haven't touched C++ in years. Why would you ever make a non-virtual destructor? Isn't it totally up to the user of a class if they choose to inherit from the class?

It seems like, if you're not providing a virtual destructor, you force users into delegation rather than inheritance, and that will catch some % of developers who aren't on the ball. They'll have leaks...

As far as I know (old info) there's no such thing as a "final" class, so anybody can inherit even if you don't want them to.

Having any virtual member functions, unless the compiler can devirtualize it, will hurt both space usage (since you now need a per-object vtable) and cache performance (since destruction now has to go through an indirection every time, instead of being a globally visible function).

You have the `final` specifier since C++11 to state that a class cannot be inherited from, but this notion of "you really shouldn't inherit from this class" has been around since classical object orientation has; C++ just couldn't express it in the language.

So if your class currently isn't being inherited from, make your public dtor nonvirtual. If and when it needs to be a base class, then the author can modify your class to have a virtual dtor, or use composition (which is often the superior alternative). If source code is not available, then composition it is. I would advise against making it virtual "just in case", that's premature pessimisation.

> per-object vtable

You mean per-object vtable pointer? Or per-type vtable?

Per object vtable pointer, yes.

Inheriting from value types is a bag of hurt thanks to issues like slicing: http://stackoverflow.com/questions/274626/what-is-the-slicin...

Anyone can technically publicly inherit from a value class, but if they do it's absolutely their fault :)

Right, but unless your Base or Derived class is going to pass Base*/&s to friends and/or other instances, you typically don't want your protected destructors to be virtual just so you can get safe destruction. Of course, the thing with C++ is, the longer you sit and think about it, the more reasons you can come up with to do something atypical.

Slicing isn't even necessarily harmful either... but I'm not sure what you had in mind.

Could you show an example for what you meant in #6?


    struct A {
        A();
        A (A const&); // copy constructor
        A (A&&); // move constructor
        template <typename T> A (T&&); // template constructor
    };

    int main() {
        A a1;
        A a2(a1); // calls the template constructor because a1 is an lvalue:
                  // it deduces T = A&, and A(A&) beats A(A const&)
    }

Why are you using delete so often in C++...? 'delete' should be rare in your code.

Someone over on Reddit asked the same thing. My reply:

You're exactly right. But there are multiple complications:

- unique_ptr and shared_ptr are only available in C++11 onwards, and we have yet to migrate to a new compiler

- auto_ptr is a joke [1]

- I've got to live with `delete` in our gargantuan codebase at work

- I could (and should) use boost's smart pointers, and indeed I do when possible. But I admit I sometimes even forget to use those and end up with naked pointers instead.

[1]: http://stackoverflow.com/a/111531/504611

Promiscuous use of smart pointers is almost as bad a code smell as manually newing and deleting stuff. Ownership should follow object lifetime, and it's really not hard to arrange member objects to make that work.

And if you really can't, and you need a container that automatically frees its contents when destroyed, just stick it in an STL vector.

Promiscuous use of shared pointers is a code smell.

std::unique_ptr (or its predecessor, boost::scoped_ptr if you don't have access to C++11) should be used all over the place. There are many reasons why you may not want by-value containment of objects, and std::unique_ptr gives you object-lifetime ownership of pointers.

The specific situation being addressed was C++03 code. But, sure; unique_ptr is harmless and mostly impossible to misuse. (Though again, to be fair, it provides little real value that the "stick it in a vector" trick didn't in 1998).

But aesthetically, I've learned to distrust any code that likes to throw objects around the design by pointer. It's untidy, and leads to needless abstraction and poor coupling. The time you spend trying to fit your objects into by-value composition is time you save later not having to untangle your knots of pointers.

The C++03 equivalent to unique_ptr is boost::scoped_ptr. It has some corner cases that are not as elegant as unique_ptr, but it was still good enough for Google for many years, until C++11 was greenlit in production.

You're missing the point. I like unique_ptr. But pointer composition in general is a smell, and unique_ptr is at best a patch around that.

The whole notion of having to carefully manage ownership via syntax (seriously: try explaining to a python programmer why they have to use this insane "unique" thing just to store a reference to something) instead of via structure (put the object inside its owner) is just vile, aesthetically. And it's something that C++ nuts tend to be really bad about.

> There are many reasons why you may not want by-value containment of objects

I'm interested in reading them actually, would you mind listing them?

Nullability - a scoped_ptr or unique_ptr can be null, a value object can never be.

Polymorphism - a value object is always exactly its static type, but you can substitute in different derived types for a unique_ptr. The previously-held object is automatically destroyed when you do this. This is essential for the State and Strategy patterns.

Reassignment/swap - reassigning a unique_ptr destroys its previous value and transfers ownership of the new one. Reassigning a value object invokes operator=. The former can (sometimes, not always) be faster than the latter - think about cases where you just want a quick pointer swap inside the object, rather than having to shuffle all the bits in a large struct around.

Release - unique_ptr has a member function to release ownership of the object, while this concept doesn't apply to value objects. Where might this be useful? Well, imagine trying to achieve exception safety inside a factory function with complex construction logic, and then transferring ownership to the caller. It's common to immediately assign the new object to a unique_ptr upon construction (so if an exception is ever thrown, it is destroyed properly), perform your logic, and then return ptr.release().

Your release() example is kind of interesting actually, since in most cases RVO will kick in and allow you to return a unique_ptr without fuss.

    #include <iostream>
    #include <memory>

    std::unique_ptr<int> foo() {
        std::unique_ptr<int> x(new int(42));
        return x;
    }

    int main() {
        std::cout << *foo();
    }
the following also works:

    std::unique_ptr<int> foo(std::unique_ptr<int> x) {
        return x;
    }
The following, however, will not work, because RVO cannot be employed (the space occupied on the stack for the return value can't be shared with the source object):

    std::unique_ptr<int> foo(bool b) {
        std::unique_ptr<int> x(new int(42));
        std::unique_ptr<int> y(new int(17));
        return b ? x : y;
    }

Wups, yeah, it occurs to me that a factory function would transfer ownership so a unique_ptr is a more appropriate return value.

You could imagine another example where the goal of the factory function is to construct an object and then set one of two or more other owned pointers to the value. You still want something to hold ownership of the pointer, but don't know its eventual home until after some complicated logic finishes, and don't want to pay the copy constructor cost. That might make a better illustration (or technically, you'd use operator= instead of release(), but same basic point).

You can use std::move() to make the last example work, so it's still easy to return unique_ptr from more complex functions.

Thanks for listing them! See my comments below.

> Nullability

What do you think of boost::optional?

> Polymorphism

unique_ptr is probably the wrong choice here. You should be using ptr_vector: http://stackoverflow.com/questions/9469968

> Reassignment/swap

The indirection and heap (de)allocation also imposes a slowdown, though. Have you actually experienced this to be worth it in some cases, or is this just a guess?

> Release

I don't think this is a valid reason...? This is a pretty common idiom for releasing value types:

    Type().swap(existing); // release 'existing'

Why shouldn't you?

You don't always have control over the code you are writing, an example that comes to my mind quickly is ffmpeg.

Because using new and delete bare makes writing exception safe, leak-free code about 10 times harder. And I'm pretty sure ffmpeg is written in C.

How does ffmpeg relate to this, isn't it written in C?

He may be talking about interfacing with C code of that nature. (You can still localize most of your new/deletes in to the wrapper classes though)

You still need delete for some things.

Smart pointers are not a panacea.

The only common exceptions for smart pointer usage should be for interfacing with an awkward C library or the OS, in which case you'd typically be using specific memory allocation functions, not new/delete.

Sorry I meant to ask why is it a habitual thing.

It's not only C++ that has problems with throws out of destructors or similar logic. How about Common Lisp:


"The ambiguity at issue arises for the case where there are transfers of control from the cleanup clauses of an UNWIND-PROTECT. ..."

It is not clear what exit points are visible when the cleanup-forms (morally similar to C++ destructor calls) are invoked. Are the exit points that are skipped torn down all the way to the control transfer's target, or is the tear-down interleaved with the unwinding?

The real answer, as far as I can tell, is "Don't use exceptions in a program that's too important to crash."

Right, and to that end, don't use a CPU that traps illegal instructions, divisions by zero, pointers to unmapped pages and so on. Those are all forms of exceptions.

I wrote C++ code for about 10 years before moving over to C#. I still like C++, but posts like this make me remember why I don't miss it. It's so baroque.

#1 should really say, "Avoid new and delete." new and delete are for experts. The average C++ programmer should go nowhere near them. If heap memory is used internally by a class that was written by someone who knew what they were doing, then fine. Use your std::vector. But new and delete are huge red flags in code.

It's fine to use new as long as you're immediately sticking the result in a smart pointer. But yes, delete should be very very rare in modern C++ code.

With std::make_shared, and the addition of std::make_unique in C++14, you shouldn't ever need to use new for this purpose either.

It depends. The trick to using new correctly is to put it in a function whose sole purpose is to immediately store new's result in the smart handle, i.e. you're writing a function that's analogous to the make_unique function.

You might need to do this if you're implementing your own smart handle or writing a factory function for a class with a private constructor (make_unique will not work).

Off the top of my head, I can't think of any good reasons to use delete.

If your handles are pointer types, you should still use unique_ptr, you just need to give it a custom "deleter". (Of course you should probably still encapsulate this in a function like make_xxx() just to avoid the potential mistake of not supplying the correct deleter everywhere.)

And yet, "new" and "delete" are amongst the very first things covered in every C++ tutorial book. For example, in Stroustrup's "A Tour of C++" they're covered in section 1.6, without any remark about shared_ptr and friends.


> If you'll tolerate my hypocrisy for a moment, here's my suggestion: try to avoid putting the const at the beginning like that

Or you could not do that, and instead put const in the least ambiguous place. Since you rarely have a 'type* const', it works very well to just use:

    const type*
for pointers to const types, and

    type* const
for const pointers to types, and it's entirely unambiguous.

> Option 1 is bad because: (a) it involves an unnecessary copy constructor invocation

Shouldn't rvalue move semantics kick in and prevent the copy?

Agreed. I'll add a note. There's still the slicing though.

For slicing, the general rule in C++ is whenever you have a value object, polymorphism does not apply. A value object is always whatever its compile-time type is - after all, it's just a value, it can't represent some other "thing" that's off in some other memory space. Polymorphism only applies with pointers and references.

Remember this rule and the throws behavior makes perfect sense, as do a lot of other C++ gotchas.

I would actually revise #1 to be to always use a smart pointer for heap-allocated objects. C++11 is 3 years old now after all.

Selection messed up in your website: http://imgur.com/YuowMEm
