Mistakes to avoid with C++ 11 smart pointers (acodersjourney.com)
134 points by debh on May 15, 2016 | 90 comments



Not mentioned is passing shared_ptrs as parameters by value. People do it all the time because "passing pointer parameters is as cheap as it gets, right?" But, shared_ptr's thread safety means that each copy involves a full trip past all cache levels to actual DRAM. 100+ cycles each. They are not pointers. They are small yet heavy objects and should be passed by reference. Copying implies assuming shared ownership and should not be done lightly.
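To make the cost concrete, here is a minimal sketch of the three signatures (Widget and the function names are just placeholders):

    #include <memory>

    struct Widget { void paint() {} };
    // Copies the shared_ptr: atomically bumps the refcount on entry, drops it on exit.
    void use_by_value(std::shared_ptr<Widget> w) { w->paint(); }
    // No copy, no refcount traffic; fine when the callee only observes.
    void use_by_ref(const std::shared_ptr<Widget>& w) { w->paint(); }
    // Often better still: the callee does not care about ownership at all.
    void use_raw(Widget& w) { w.paint(); }

    int main() {
        auto w = std::make_shared<Widget>();
        use_by_value(w);   // pays for a copy: do this only to take shared ownership
        use_by_ref(w);
        use_raw(*w);
    }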

Also, "if(!p->expired()) p->lock()->foo()" is not thread safe. The last shared_ptr could die on another thread between the test and the lock. Instead, what you want is "if (auto q = p->lock()) q->foo()"


Atomic memory accesses do not require touching DRAM in the general case. The cost of an atomic operation on a modern Intel CPU is closer to 20 cycles than 100. You will see higher costs if you have two processors trying to access the same shared object concurrently, of course.

Your point is still valid, though -- a shared_ptr should be copied only when you actually want to increase the reference count.

In general, I try to avoid shared_ptr unless I really need reference counting. unique_ptr is much easier to reason about and is much more efficient.


I agree, and I frequently use unique_ptr only, and then pass naked pointers to the non-owning objects that need a reference. In this way the code is clear: if it is a regular pointer, it is always non-owning.

Some might argue that using shared pointers is better for this application, but then you must also deal with potential reference cycles and add weak pointers to the mix eventually, and in addition to these concerns, handle any runtime hits for the reference counting.
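A minimal sketch of that convention (Car/Engine/Dashboard are made-up names, not from the article):

    #include <memory>

    struct Engine { int rpm = 2000; };
    // Owner: the Car's unique_ptr controls the Engine's lifetime.
    struct Car {
        std::unique_ptr<Engine> engine{new Engine()};
    };
    // Observer: a plain pointer signals "non-owning, must not outlive the Car".
    struct Dashboard {
        Engine* engine = nullptr;
        int rpm() const { return engine ? engine->rpm : 0; }
    };

    int main() {
        Car car;
        Dashboard dash;
        dash.engine = car.engine.get();   // hand out the raw, non-owning pointer
        return dash.rpm() == 2000 ? 0 : 1;
    }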


Life is easy when there's an unambiguous single owner. Shared pointers are really for the harder situation where ownership is distributed. I prefer to use arenas if possible, when objects live in a complicated graph.


It is definitely better to use T& instead of shared_ptr<T>&. Passing references to either unique or shared pointers is just adding a layer of indirection you are never going to use, unless you intend to copy-construct the pointer object later, which sounds more like you need to rethink your object design if you want to do something that messy.


To be clear, I was not referring to references at all but rather to naked pointers you get by calling get() on a unique_ptr, and passing those to non-owning classes/functions that need them (including third-party APIs).


I generally prefer to pass by const ref. If the function needs to make a copy, it can always invoke the copy constructor. If it doesn't, then all is well and performant. I know there are optimizations that can happen if you pass by value and the function intends to make a copy. But those are micro-optimizations, and I've always felt it's not worth the confusion it creates for the caller as well as the callee. Does anyone else follow this general rule of thumb?


I used to do the same thing, until I saw slides ( https://stlab.adobe.com/wiki/images/8/85/2008_06_26_classes_... ) for a talk ( https://my.adobeconnect.com/p53888531/?launcher=false&fcsCon... ) that pointed out that explicitly making the copy makes it much harder for the compiler to do copy elision.


Interesting point about shared_ptr. Do you think shared_ptrs should be passed by const reference or just by reference?


I'm a "const all the things!" coder in general. If you see something that's not const, that is documentation that you should expect your object to be modified, not that I was too lazy to type 'const' ;P


Unfortunately a const shared_ptr is like a const raw pointer: you can't change the pointer itself, but can still modify what it points to.


You can do const shared_ptr<const type>; it's like a const T* const raw pointer.
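For example (just a sketch, the names are arbitrary):

    #include <memory>

    void demo(const std::shared_ptr<const int>& p) {
        // *p = 42;     // error: the pointee is const
        // p.reset();   // error: the shared_ptr itself is const
        int copy = *p;  // reading is fine
        (void)copy;
    }

    int main() {
        auto q = std::make_shared<int>(5);
        demo(q);        // shared_ptr<int> converts to shared_ptr<const int>
    }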


Herb Sutter's GotW #91 talks about smart pointer parameters in detail and when to use const and reference:

https://herbsutter.com/2013/06/05/gotw-91-solution-smart-poi...


If you're going to pass it by const_ref, why not just pass the actual stored object by const_ref instead?


Edit: Ignore this comment. I shouldn't talk so much about something I use so little. Somehow I skimmed through http://en.cppreference.com/w/cpp/memory/shared_ptr/shared_pt... and thought that shared_ptr( const shared_ptr& r ); was missing.

>> That's a good point. If the function will eventually lead to some object retaining a reference, then you should use a non-const shared_ptr ref. The assignment into the object will do the copy constructor and that needs a non-const ref. But, you don't need to be making temp object copies along the way. If the function will not lead to something retaining a reference, then you shouldn't be passing the reference retention object to it. Just pass a direct ref to the target object.


> The assignment into the object will do the copy constructor and that needs a non-const ref.

Hum? What assignment into the object? Where I'd typically see taking a `const std::shared_ptr<> &` to signal "retaining ownership" would be something like this:

    class Foo {
    public:
        void AppendChild(const std::shared_ptr<Foo> &x) {
            children_.emplace_back(x);
        }
    private:
        std::vector<std::shared_ptr<Foo>> children_;
    };
Why should Foo::AppendChild's signature be changed to a non-const ref?


It should be changed to pass by value, and then use std::move to move it into the children array. The reason is that if the argument is a temporary, you are making an unnecessary copy of that temporary, when you could just transfer ownership.

This also gives callers the flexibility to std::move into your argument, transferring ownership.

A good rule of thumb is, if you are going to unconditionally take ownership of an object, accept it by value.
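For reference, a minimal sketch of the by-value version of the AppendChild example above:

    #include <memory>
    #include <utility>
    #include <vector>

    class Foo {
    public:
        // By value: lvalue arguments are copied in, temporaries are moved in,
        // and either way we move from the parameter into the vector.
        void AppendChild(std::shared_ptr<Foo> x) {
            children_.emplace_back(std::move(x));
        }
    private:
        std::vector<std::shared_ptr<Foo>> children_;
    };

Calling it as AppendChild(std::make_shared<Foo>()) then moves the temporary straight into the vector instead of copying it.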


You are correct. I had a brain fart. Edited my comment.


Typically you should pass raw pointers (constness depends on your use case...). I can't really think of a scenario where the method/function needs to be coupled with the pointer type.


I will often have functions that accept a unique_ptr as a parameter. This way, I can clearly encode that the object belongs to the callee now.
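Something like this (Connection and adopt are made-up names, just to show the shape):

    #include <memory>
    #include <utility>

    struct Connection { };

    // The signature itself documents the hand-off: the callee now owns the Connection.
    void adopt(std::unique_ptr<Connection> conn) {
        // store it somewhere, or let it be destroyed when this function returns
    }

    int main() {
        std::unique_ptr<Connection> conn(new Connection());
        adopt(std::move(conn));   // caller must move explicitly; conn is now empty
    }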


And if you are going to pass the raw pointer, you should probably pass by reference (&) instead. The only real "features" of raw pointers over references are the ability to reassign or delete them, and if you are passing something by reference, you do not want anyone doing any such thing to it 99.9% of the time.


Unless they are modified: const reference. Otherwise you cannot pass a temporary object, as in:

  void foo(const std::shared_ptr<int>& p);

  ...

  foo(std::make_shared<int>(23));
This will not compile with

  void foo(std::shared_ptr<int>& p);
since a temporary object cannot bind to a non-const reference, only to a const one.


The first mistake is using shared_ptr to begin with. It's often unnecessary (as the article says, one can often get away with using unique_ptr). When a unique_ptr is not sufficient, a shared_ptr often obfuscates the true owner of the object.

Every object should have one true owner that manages its memory. Anything else will lead to performance regressions and outright bugs.


> The first mistake is using shared_ptr to begin with.

Yes, definitely! Using shared pointers can hide bad design.

There are some cases where it is hard to choose the owner of the data. Two examples come to mind. The first is directed graphs: who owns the nodes? The second is parsers: sometimes AST subgraphs must be attached to extra structures during further processing (type analysis, code generation and so on).


A rule of thumb that I use is, if you cannot see the owner, then you're probably focusing too narrowly. For a directed graph, which is a "sea of nodes" with no real ordering, I would have the "graph" object own all its nodes; for an AST, if it is truly a tree, then the parent owns its children, or otherwise see the directed graph case.


The solution with a graph object looks nice, but when you allow some graph operations, then subgraphs can be shared among many (new) graphs. So, another level of sharing. I can't find any better solution for this problem.


Disagree.

Last time I used shared_ptr in my code was for a 3D authoring app. Users can load/copy/paste 3D objects, I want a single copy on RAM and VRAM for clones of the same 3D object. If you ask me “why” — because VRAM is limited, and because PCIe bandwidth is a bottleneck.

I've decided that the scene node is the true owner of the mesh, and placed a shared_ptr<mesh> in my scene node class.

Technically, I could instead decide that scene (i.e. “the graph”) is the true owner of all the meshes in all graph nodes, as you’re saying. Two problems:

1. I would have to write my own code for memory management and reference counting, replacing a perfectly good standard library implementation for no good reason.

2. Scene is not one true owner that manages the memory of the meshes because of different lifetimes. Users can add/remove stuff from the scene. When they do, both scene nodes and meshes are allocated/deallocated. When two objects are allocated + deallocated together, it’s often a telltale sign that one of them is the true owner of another one. The scene itself lives much longer.


I've been pushed into using shared_ptr when I want to be able to put the pointers into a vector (and other containers).


Two minor comments:

1.) Mistake #7 has been somewhat fixed in the TS by specializing shared_ptr for array types (similar to the unique_ptr<type[]> syntax). What they said was true for C++11, though.

2.) If the data within a shared_ptr is not owned by any particular object but is otherwise constant, use shared_ptr<const type>. They can be created from a shared_ptr<type> object. This helps with the aliasing problem mentioned in mistake #1

And a slight nitpick on #4: I imagine the performance of make_shared depends on the library rather than the compiler itself (and I imagine the performance gain is minimal - it is usually used for readability). Also, C++14 adds make_unique, which was overlooked in C++11.


make_shared allows the shared pointer to be constructed with its memory and metadata in a contiguous block. This improves cache efficiency and locality and mitigates heap fragmentation. Although make_unique primarily serves to improve readability, make_shared can reduce smart pointer performance overhead dramatically.

See http://stackoverflow.com/questions/24779929/what-happens-whe...

You are right in pointing out that the library determines whether using make_shared will improve performance.
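A minimal sketch of the difference (Widget is a placeholder; whether make_shared really does a single allocation is up to the library, as noted above):

    #include <memory>

    struct Widget { int x = 0; };

    int main() {
        // Two allocations: one for the Widget, a second for the control block.
        std::shared_ptr<Widget> a(new Widget());
        // Typically a single allocation holding the Widget and the control block
        // side by side, which is kinder to the cache and the allocator.
        auto b = std::make_shared<Widget>();
    }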


The biggest mistake with these objects is over-using them. They are very helpful for internal memory management but they should definitely be avoided in APIs unless the point of the API is to do something like transfer ownership (e.g. factory method returning std::unique_ptr<T>).

Raw pointers continue to be fine in a wide variety of cases. Suppose the object model has something like a type T with APIs to access pointers to types A, B and C, the lifetimes of A, B and C are always tied logically to the lifetime of T and everything you do with the pointers to A, B and C is in the context of some T. This does not require special magic pointer objects to “share” A, B and C pointers because their safety is ensured transitively. So you might wrap T but you don’t need to wrap the others.

Besides, pointer objects are still annoying to use: sometimes they’re pointer-like (e.g. "if (somePtr)" works) and sometimes they’re not (requiring a ".get()"), and they seem to “infect” entire call chains. If you have previous API dependencies on boost::shared_ptr<>, you can’t easily replace those with std::shared_ptr<> either. Use them very judiciously, and always try the simplest thing that would work, first.


Good points! I ran into some difficulty translating boost::shared_ptrs to C++11 standard shared_ptrs a while back while adopting some open source tech for Xbox.

Also, you're spot on about people touting smart pointers as a silver bullet for all memory issues, leading to an overuse of these otherwise fantastic tools.


Mistake #10 contains a race condition when another thread destroys the last shared_ptr to pIceman (Top Gun fan much?) between the call to !pMaverick->myWingMan.expired() and pMaverick->myWingMan.lock().

Just use:

  if (auto p = pMaverick->myWingMan.lock())
    p->take_shower_with(pMaverick);
See http://en.cppreference.com/w/cpp/memory/weak_ptr/lock


This is why smart pointers via templates don't work well. The mold always leaks through the wallpaper. There's no analysis at compile time to catch those errors.

This is what Rust is for. You really need a borrow checker.


Let me try to identify which ones Rust catches:

"#1: Using a shared pointer where an unique pointer suffices."

The Rust analog would be using Rc or Arc when Box is enough. Rust has the same problem.

"#2: Not making resources/objects shared by shared_ptr thread safe"

Rust does NOT have this problem! You have to be explicit about which objects are thread safe (via Sync), and Rust will not allow you to transfer objects across threads without it.

"#3 : Using auto_ptr"

Naturally, Rust does not have this issue, which is strictly legacy C++.

"#4 Not using make_shared to initialize a shared_ptr !"

"#5 Not assigning an object(raw pointer) to a shared_ptr as soon as it is created"

"#6 Deleting the raw pointer used by the shared_ptr"

These are the same issue: in C++ you can construct a shared_ptr from an object you allocated with new; the reference count must then be allocated separately. Rust does not allow this, and avoids this issue, at the cost of some flexibility (like custom deallocators).

"#7 Not using a custom deleter when using an array of pointers with a shared_ptr"

AFAIK Rust doesn't have this issue, because it doesn't have C++'s array-decays-to-pointer semantics. In Rust the size of the array is part of the type; to be fair, this loses a bit of flexibility, but C++'s semantics here are insane, so point goes to Rust.

"#8 Not avoiding cyclic references when using shared pointers !"

Rust has this problem!

"#9 Not deleting a raw pointer returned by unique_ptr.release() "

Rust has an analog in Box::into_raw(), which is the same issue.

"#10: Not using a expiry check when calling weak_ptr.lock()"

Close one: Rust has the same issue, in that you can call rc::Weak::upgrade().unwrap(). unwrap() is often used with some impunity, especially with locks. However you do have to be explicit about your use of it, while in C++ it's easier to forget. Let's give this one to Rust.

Let me give one more point to C++:

"#11: Accessing an object through a Rc/Arc/RefCell while a caller holds a mutable reference."

This is strictly a Rust problem: because there cannot be more than one extant mutable reference to an object, you have to be careful to not call out to something while holding a mutable reference. This is a doozy.

Total score: There are 7 unique to C++, 1 unique to Rust, and 3 that are shared.

Note that none of this has to do with the borrow checker! Indeed, the whole point of reference types like Arc/Rc/RefCell are when your lifetimes are dynamic, and you need to avoid the borrow checker, which assumes a statically knowable lifetime.


Your points are technically correct, but I thought I'd add that Rust offers mitigations for these issues in practice:

1. Using shared ptr where unique ptr would do: This happens a lot less in Rust because (a) in C++, shared ptr predated unique ptr, so you see a lot of legacy code needlessly using shared ownership; (b) Rc/Arc prevents mutation without jumping through hoops (RefCell/Mutex), discouraging its use for ergonomic reasons. Additionally, even when it does happen the reference counts are non-atomic on Rc and have to be twiddled manually, so the overhead is much less than with copy constructors which can be invoked silently and often.

2. Reference cycles: This is a real problem. It's somewhat mitigated by the fact that you need a RefCell or Mutex to construct the cycle. If those are absent, no cycle.

3. Box::into_raw: Well, sure, but this method is really rarely used, due to the lessened need for legacy interop. It probably wouldn't make a list of the top 100 mistakes made with smart pointers in Rust, much less the top 10.

4. Double dynamic borrows: This does happen, but you need RefCell or Mutex to do it. It's not really a smart pointer issue at all. In any case, it's there to enforce memory safety, which C++ doesn't even try to; it's not like C++ does this better than Rust.


The borrow checker is what makes unique pointers in Rust unique. In C++, you can pass ownership but still reference the unique pointer you moved from. It should crash at run time, dereferencing null, but the compiler will happily compile it.

Can you actually cause #11 in Rust? Example?

Circular references remain a problem, but they can only cause memory leaks, not security errors or crashes. It would be nice to have a Rust static analysis tool for circular references. Often, you can prove their absence for a type.


Perhaps you consider this splitting hairs, but I would say this isn't the borrow checker at work, because nothing is being borrowed! Instead it's Rust's move semantics, which is separate from the borrow checker. We can envision C++ with no borrow checker, but Rust-style move-semantics-by-default, and it would be a better language for it.

An example of #11:

    use std::cell::RefCell;
    fn main() {
        let c = RefCell::new(5);
        *c.borrow_mut() = *c.borrow() + 1;
    }
this is equivalent to *c = *c + 1, which is conceptually safe, but crashes at runtime in Rust.

A real world example of this: say you are keeping a count of some event in a global or thread local storage. You pull out a mutable reference, so you can update the count more efficiently. You then call a function which accesses that global. Crash!

Rc/Arc have the same issue. This defeats the point of refcounting: it means you must have global knowledge of the refcount in order to reliably call mutating functions on the contents. But if you have that global knowledge, you wouldn't need refcounting in the first place!


  > but crashes at runtime in Rust
Because "crash" is ambiguous, I'll note that this code example panics at runtime (as opposed to segfaulting).


Mistake 10 has a race condition. The shared pointer could be cleaned up between the weak pointer expiry check and lock. Why not just lock and check for null before using it?


You're right. These were all good tips except for #10, where the "solution" is not thread-safe. Checking for null is better.


His last fix is buggy:

    if( !pMaverick->myWingMan.expired() )
    {
        cout << pMaverick->myWingMan.lock()->m_flyCount << endl;
    }
The problem here is it will still crash if another thread destroys the object between line #1 and line #3.

Here's a good fix, from http://en.cppreference.com/w/cpp/memory/weak_ptr/lock

    if( auto observe = weak.lock() ) {
        std::cout << "\tobserve() able to lock weak_ptr<>, value=" << *observe << "\n";
    }
    else {
        std::cout << "\tobserve() unable to lock weak_ptr<>\n";
    }


Yep - thanks for calling this out. I should have put out a version that's safe in multithreaded environment. I've updated the article to reflect this.


As someone who uses more C than C++, I have often wondered about the value of "smart" pointers and other pseudo-automatic memory management abstractions: in any case one has to understand traditional manual memory management to use them correctly, and on top of that one also has to understand the additional semantics of the class implementing them, or bugs like the ones shown in this article will result.


Unique_ptr offers many advantages:

1) Exception safety: what if you throw between your new and delete? Unique_ptr will clean up for you anyway and leave you in a stable state.

2) What if you forget to delete or your delete code never runs? Any of the many problems associated with explicitly writing delete can be non-issues with unique_ptr.

3) Clarity. When you see regular pointers in C or C++, it is never immediately clear who owns them. With unique_ptr you don't have this problem, because ownership doesn't look like a plain pointer. So if you do see a raw pointer, you know immediately it is non-owning.

4) Convenience functions like reset() on unique_ptr handle releasing the old object and adopting the new one in one step. In general, the less code you have to write, the better for safety and maintainability.

5) unique_ptr can handle resources other than memory. It can be used to manage file streams, DB connections, etc., and automatically close those too, if you provide a custom deleter function (which can just be a lambda, for elegance); see the sketch after this list.

6) unique_ptr is not copyable. This is important: if you own resources, it forces you to consider this when you copy objects, and the compiler will force you to write your own copy constructor. With raw pointers, it is easy to get into trouble if you forget these things.

7) Many syntactic improvements: e.g. unique_ptr<T> automatically calls delete and unique_ptr<T[]> calls delete[], so you can't accidentally get it wrong.

This is just a start list. The value of unique_ptr is considerable.
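For point 5, a minimal sketch using a plain C FILE* (the file name is arbitrary):

    #include <cstdio>
    #include <memory>

    int main() {
        // A custom deleter (here a lambda) lets unique_ptr manage non-memory resources.
        auto closer = [](std::FILE* f) { if (f) std::fclose(f); };
        std::unique_ptr<std::FILE, decltype(closer)> file(std::fopen("log.txt", "w"), closer);
        if (file) {
            std::fprintf(file.get(), "hello\n");
        }
        // fclose runs automatically when 'file' goes out of scope, even on exceptions.
    }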


Most important advantage: RAII. You cannot forget to free the pointee even in the presence of exceptions. The "bugs" described in the article are just a case of not having read the manual. (Like, who in their right mind would assume that wrapping something in a shared_ptr would make it thread-safe?!)


I totally agree. In most software projects, memory management isn't a particularly difficult aspect of the problem. With today's software tools, if your code contains memory leaks or pointer bugs, you're just being an amateur. And in those cases where memory management requires a bit more thought and design, it's rather unlikely that some standard smart pointer protocol is going to be a glove-fit solution. So I never saw the point of smart pointers unless you want to reinvent Python or something.


I have been developing software since the mid-80s.

Never been part of a C or C++ project at the enterprise level, where any single developer could answer any question about memory management tracking.

Tracking down crashes due to memory mismanagement was always a several days task.


If I remember correctly, C++11 smart pointers need extra care when being passed through a message pump across threads, at least in my memory of using them on Windows.[0]

When sending/posting a smart pointer to a different thread's message pump via a function like PostThreadMessage, the sending thread's smart pointer would go out of scope and the memory would be released because the receiving thread hadn't increased the reference count. By the time the receiving thread's message pump processed it, the pointer had already gone out of scope.

If you are using the producer-consumer pattern, most likely you'll have to use unique pointers and call release manually; I guess that's still better than using new and delete, right? Or perhaps I could try using SendMessage instead of PostMessage and read the return value to determine if the pointer got to the other side (for some reason, that wasn't an option to me).

[0] http://stackoverflow.com/questions/11290912/passing-objects-...


I assume that PostThreadMessage is a plain C API. In C++ you should use a queue that properly supports copyable and movable types, or at the very least write a wrapper on top of PostThreadMessage that does.


Mistake #2 is just bizarre: who in their right mind could assume that wrapping an object in shared_ptr makes it thread-safe?! And the mistake is worded as if you should make these objects thread-safe, which is just as bizarre.


So if I'm coding up a LinkedList or a Binary Tree today, instead of using a raw pointer or shared_ptr, should I be using a unique_ptr to prevent circular references?


You should be using a raw pointer, period. The list/tree destructor must be responsible for traversing and deleting the whole structure.

If you have a linked list of, say, 10M elements linked by unique_ptr, deleting the head of the list would cause recursive [1] destruction of all elements in the list. Stack overflow, CRASH, BOOM, BANG!

[1] Take a simple example of list A ->a B ->b C ->c 0 where arrows are unique_ptrs owned by the nodes. Deleting A must invoke the destructor for ->a , which will delete B, invoking the destructor for ->b, which will delete C invoking the destructor for ->c which is null, thus finishing the recursion.

EDITS: added labels to pointers for clarity.
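A minimal sketch of that failure mode (Node is just a placeholder):

    #include <memory>

    struct Node {
        std::unique_ptr<Node> next;   // each node owns its successor
    };

    int main() {
        std::unique_ptr<Node> head(new Node());
        Node* tail = head.get();
        for (int i = 0; i < 10000000; ++i) {   // build a very long list
            tail->next.reset(new Node());
            tail = tail->next.get();
        }
        // When 'head' goes out of scope, ~Node() destroys 'next', which destroys its
        // 'next', and so on: millions of nested destructor calls, likely a stack overflow.
    }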


You could (and IMHO should) use RAII semantics with unique_ptrs for this. Writing custom destructors will introduce unnecessary code that you'll have to maintain and unnecessary bugs that will creep in eventually.


This wreaks havoc on the ownership semantics. Smart pointers are not a good fit for pointer-based data structures. Just have the list destructor clean it up.


Which part of "recursion", "stack overflow" and "crash" did you not understand? Do you think std::list uses smart pointers?


You are getting downvoted, but you are completely right. When implementing node-based containers it is perfectly all right to use raw pointers, since the nodes in no way own their children or siblings; they are all collectively owned by the data structure.

You should of course use smart pointers for the local (automatic) pointers that temporarily hold node references in functions that manipulate the data structure.


The downvotes are for the snark, not for incorrectness of the content, I'm fairly certain. Such aggressive replies are not appreciated on HN. I find that this makes HN a much more pleasant community than some others.


The reply is that way because the OP didn't seem to acknowledge that using unique_ptr in this case has serious problems. He still insists on using unique_ptr, with reasoning boiling down to "you should avoid writing code because you may create bugs". As if slowness or a lurking crash caused by not writing code were not bugs in themselves.


Oh I know, using unique_ptr to implement a linked list is a terrible idea. Having an element be owned by its predecessor is a very weird idea, not to mention that it wouldn't work at all for a doubly linked list... I'm just saying that your comment could have been written in a nicer way.


For a doubly linked list, I think the right approach is to store a unique_ptr pointing in one direction and a raw pointer pointing in the other direction. It is then understood by convention that raw pointers should never be deleted.
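A minimal sketch of that convention (keeping in mind the recursive-destruction caveat discussed upthread for very long lists):

    #include <memory>

    struct Node {
        std::unique_ptr<Node> next;   // owning link: forward direction
        Node* prev = nullptr;         // non-owning link: never deleted through this
    };

    int main() {
        std::unique_ptr<Node> head(new Node());
        head->next.reset(new Node());
        head->next->prev = head.get();   // the back-pointer stays raw by convention
    }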


I think for self-referential data structures, passing them by const unique_ptr& is a reasonable way to do it.

Thanks, Deb.


I think this is not right -- since references aren't reseatable, this would effectively make the data structure (but not necessarily its contents) immutable.

Shared pointers or raw pointers are the options that make most sense, in my opinion.

Edit: forgot to say, if you use shared_ptr beware circular references. e.g. in the case of a doubly linked list. weak_ptr could be used to break the cycle, but I'd probably use raw pointers here.


shared_ptr forward, weak_ptr back.


I'm not completely understanding the recommendation for #5:

Recommendation: If you’re not using make_shared to create the shared_ptr , at least create the object managed by the smart pointer in the same line of code – like :

shared_ptr<Aircraft> pAircraft(new Aircraft("F-16"));

I think he really means to say that the second shared_ptr is better initialized from the first shared_ptr instead of the raw ptr.
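If that's the reading, a minimal sketch of the difference (Aircraft is the article's example type; the variable names here are mine):

    #include <memory>
    #include <string>

    struct Aircraft {
        explicit Aircraft(const std::string& n) : name(n) {}
        std::string name;
    };

    int main() {
        Aircraft* raw = new Aircraft("F-16");
        std::shared_ptr<Aircraft> p1(raw);
        // std::shared_ptr<Aircraft> p2(raw);  // WRONG: second control block, double delete
        std::shared_ptr<Aircraft> p2(p1);      // OK: shares p1's control block
    }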


Ah, so C++ pointers, even with "smart pointers", are still complicated and massively error prone.

I thought that's sort of what they were meant to fix. Apologies, I haven't coded in C++ for several years (when auto_ptr still roamed the Earth). But reading things like this doesn't inspire confidence that we've made much progress...


I might be biased, but I don't know that you can really call it massively error prone. Certainly, if you want to break the rules, C++ will assume you have a good reason and let you shoot yourself in the foot all the way through the Earth's crust, but the smart pointer rules are actually fairly simple this time.

For example, take the first mistake: "using a shared pointer where an unique pointer suffices". Unique pointers and shared pointers mean completely different things; if you know what they are, you would never think of mixing up one with the other.

If I had to make an analogy, this is like "using a video with a single frame when an image suffices". The video may work, but that's not at all how you're supposed to use it, and surely, anyone who knows what a video and an image are would not make that mistake!

You'll notice that most of the remaining mistakes here are with the shared pointer and weak pointer. Thankfully, in the overwhelming majority of cases you don't need them at all; sticking to the very simple unique_ptr is what you want, and unique_ptr is practically FootProof™.


I see your point of view, and I agree that in the right programmer's hands, it's fine. Unfortunately, as we've seen repeatedly, people overestimate their pointer abilities and it leads to all sorts of memory and security issues.

I figure the C++ smart pointers should be as easy to use as possible, without caveats vs. C style pointers or whatever.

Maybe that isn't practical in the C++ language design right now, but that's just how I see it.

cheers


For almost all use cases, the issues mentioned don't even come up. You create a unique_ptr with std::make_unique, or a shared_ptr with std::make_shared. It lives its entire life as that type, and when you no longer need it, you forget about it.

The issues in the article had me banging my head, because they aren't how smart pointers are used in practice. The only time that you would call release(), for example, is when you are passing a pointer into a pre-C++11 API. Once you have done so, you wouldn't be worrying about calling delete anyways, because you have transferred ownership.


If programmers can't be diligent enough to learn simple unique pointers, they won't bother being diligent about any of their code.

So you may design a better pointer for them (if possible) and still have a huge spaghetti ugly bug-ridden insecure unmaintainable codebase... But at least the pointers are easy!

Even if you make them program in a rubber-room helmet language, they are going to cause just as many problems.

Tools like unique_ptr are meant to let the good programmers work more expressively and make fewer mistakes. They aren't meant to let random folks write perfect code without thinking and learning first.


They're as "massively error-prone" as garbage-collected references in other languages.

> #1

Performance aside, unique_ptr gives you something that most languages don't have, unique ownership enforced at the language level. This makes code less error-prone. Disregarding #1 just gives you what you get everywhere else.

> #2

Basically just says, if your object isn't thread-safe, don't use it concurrently. Again, nothing specific to C++.

> #3, #4, #5, #6, #7

Well duh, don't use manual allocation or the deprecated auto_ptr in the first place.

> #8

That's the only real gotcha, and the reason why weak_ptr exists. It's the price for having RAII and destructors (because when the destructor is called, all your class members still have to be alive).

In return for keeping your pointer graph cycle-free, you get actual automatic resource management that isn't limited to memory.


Mistake #0:

Use smart ptrs (especially shared_ptrs) instead of having clear memory ownership contracts and incorporating memory ownership into your design. Two objects each having a shared_ptr to the same object as a member should raise red flags.


>instead of having clear memory ownership contracts

You have a good point for shared_ptr, but unique_ptr offers the best clarity for ownership, much better than raw pointers alone, where owning and non-owning pointers all look the same. A unique_ptr that hands out regular pointers to its memory makes it clear in the code who owns what.


Can anyone point to a similar article for using C++ casts?


The little social media bar, when it pops up (which appears to happen at random) unavoidably obscures the left side of the text when reading in landscape mode. Please consider removing it.

Also, please read up on scroll jank and fix the javascript event handlers (or remove them -- why are you handling scroll events at all?).


Thanks for reporting this. Someone else noticed it on a nexus tablet as well. I'm looking into getting this fixed.


#11: Do not create two shared_ptr from the same raw ptr! It creates two control blocks and reference counts.


That's #5.


Oh, I missed that one.


Should have used a unique_ptr. ducks


Shudder, shudder, C++ 11 seems as bad a monster as C++ always has been...


mistake #1: not using Rust instead!

(I learned that on HackerNews - didn't write a single line of C++ or Rust in my life, though)


Why does everyone hate new/delete so much? Instead of this headache with smart pointers and all the complicated stuff they bring, you get clear and easily understandable memory management.


Because as a computer scientist, the last thing you should ever want to do is do something manually. When you do something by hand every time - especially something as common as managing heap-allocated objects - you will get it wrong sooner or later. And memory management is complicated, even if the primitives are not. So when you combine something complex (even if only one level of indirection away) with a ton of boilerplate, you get constant problems, which 25 years of C++ have demonstrated repeatedly in the context of raw memory operators.

These smart pointers, in reality, are not a headache. You want to make a heap object? make_unique. You want to pass it around? Go for it. You want to pass ownership? std::move it. And only in the extremely rare case where you have multiple objects of non-deterministic lifetime that need to both access the exact same data structure, then you pull out shared_ptr and deal with the synchronization.

But unique_ptr, by itself, is probably the most productive addition to C++, period. More than the STL, more than templates. If you use it correctly and design your code properly, it can replace every heap allocation and drastically reduce code complexity.
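A minimal sketch of that workflow (Blob and the function names are placeholders):

    #include <memory>
    #include <utility>

    struct Blob { int data = 0; };

    std::unique_ptr<Blob> make_blob() {            // factory: ownership begins here
        return std::unique_ptr<Blob>(new Blob());  // or std::make_unique<Blob>() in C++14
    }
    void inspect(const Blob& b) { (void)b; }       // borrower: no ownership involved
    void consume(std::unique_ptr<Blob> b) {}       // sink: ownership ends here

    int main() {
        auto blob = make_blob();
        inspect(*blob);               // pass it around freely
        consume(std::move(blob));     // transfer ownership; blob is now empty
    }                                 // nothing to delete, nothing to leak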


Or you could just use lisp and not have to worry about any of this stuff ;-)


And forget about developing games and drivers and OSes and...


Once upon a time Lisp was used to write OSes and games....


The game Abuse (now open source) uses/used Lisp as its scripting language: https://news.ycombinator.com/item?id=516853


I have spent quite a few long nights playing it. :)




