
The approach of having options to optimize hot code paths with zero-cost abstractions, while still not having to worry about memory management in the vast majority of the rest of your code, sounds like the best of both worlds to me (given that we only trade performance, not safety, for convenience).



I write C++ exclusively and never worry about memory management.

I don't use smart pointers since shared ownership is a bad concept.

The problem of memory management is largely trivial.


The problem of memory management is largely trivial _if_ you are in a small clean opinionated private codebase without cruft, collaborators, third party code, ...? :)

Google et al. have been working on sanitisers etc. because, even in well-kept codebases with strict coding standards that are rigorously applied in reviews, memory bugs do still creep in.


Memory management is trivial if your problem is trivial. In the real world you have network connections that fail, third party libraries with other conventions than yours, multiple threads with their own lifetimes, memory mapped files, etc.


Third party code is a risk and should always be carefully managed and properly isolated.

This applies regardless of programming language.

Of course the web people and their "frameworks" are just another demonstration of how bad relying on third-party code can be.


Not true; taking on C or C++ dependencies has different potential consequences than taking on Java or Python dependencies.


Which are?


Non-local bugs due to memory safety. If I have a function

    def f(x):
        return ...

in Python or Java, that function will work regardless of what other modules I import (modulo monkey patching in Python, though you can defend against that).

In C you don't have these guarantees -- foreign code can stomp on your memory.

This is probably why C does not have an NPM-like culture of importing thousands of transitive dependencies -- because that style would just fall down.

Also a minor issue, but C doesn't have namespaces (for vars or macros), so it's a bit harder to compose foreign code.

Also compiler flags don't necessarily compose. Different pieces of C code make different portability assumptions, and different tradeoffs for speed.


So your argument is that C does not have strong module encapsulation, and then you argue that Python does.

That is just plain false, since a Python module can trivially be tainted by what you import before it, and the Python environment is widely known for its dependency hell.

Meanwhile C modules, once compiled, can be fully isolated from what you link against them, depending on build and link settings.

Non-local bugs are just a matter of sharing state across a module boundary. Memory errors are just a very small subset of the possible bugs you can have in a program, and preventing them doesn't magically solve all the other, more important bugs.


Not disagreeing with your first paragraph, and I will add that memory management mistakes happen to the best. But it is also probably true that Google and others do this because they know there will always be someone committing shit, no matter whether they are at Google or another big company. So they want guarantees, not blind trust.


> _if_ you are in a small clean opinionated private codebase

This is actually an important point. I think all codebases can (and should) be split into small, opinionated, privately owned sub-codebases. This is why developing large-scale projects can work even in languages like C. After all, this is what that whole 'modularity' thing is about ;)

(it also implies that external dependencies need to be managed the same way you handle internal dependencies: as soon as you use an external dependency, you also need to be ready to take ownership of it)


Memory management is fundamentally a cross-cutting concern, so modules don’t help, unless you introduce some hard barrier (like copying everything at boundaries).


Modules work if they can operate without allocating or are generic over allocators. I don't really get why people think it's normal for e.g. a websocket decoder to insist on calling read, write, epoll, and mmap, if the user just wants to encode and decode messages.


There is no maximum size for a websocket message, so unless you want to force a size ahead of time, you might need to allocate to resize your buffer.

Or you could give the message in fragments to the user, but that immediately becomes a very inconvenient API.


Generational-indices also help to secure system boundaries. The memory is always owned and manipulated by a system, and the system only hands out generational-index-handles as "object references".

Arguably that's even a good idea in memory-safe languages: it avoids tricky borrow-checker issues and also prevents the outside world from directly manipulating objects. Everything happens under the control of the system.


If you witness the amount of effort/work/man-hours that is being poured into making memory management easier, I'd say it is far from a trivial problem.

If you witness the endless number of bugs, many security-related, which stem from the idea that people can handle memory, I'd say it is far from a trivial problem.

If you witness any modern language, a common design principle is to eliminate memory management, which argues it is far from a trivial problem.


Eliminating memory management is silly, resource management is a core part of programming.

There is a reason C++ still reigns supreme even though it was designed in the 1980s.


Elimination is perhaps too strong a word, as you can't eliminate it entirely. But you can reduce its cognitive load by a large factor. The amount of code being written in garbage-collected languages is a witness to that.

More manual memory-management methods still have their place, because there are problems where you can't afford to use a garbage collector, or where it gets in the way.

C++ will be relevant for many years to come. It has way too much momentum as a language, and too much software has been written in C++ to ignore it. I personally think Rust will eventually carve out a large part of its niche, though, because I think it has a far better approach to managing memory.


> I don't use smart pointers since shared ownership is a bad concept.

I think you mean you don't use shared smart pointers? Or do you avoid unique_ptr too?


I use values.


You never use the heap?


I dunno about the GP, but it's a JPL guideline to never use dynamic allocation after initialization. So it's not unthinkable. I'd suspect that many microcontroller programs might have to be really careful about using the heap just because they just don't have the memory to allocate that much. https://www.perforce.com/blog/kw/NASA-rules-for-developing-s...


It's pretty easy in a lot of embedded applications to basically only have objects that live forever or are allocated on the stack. I usually aim for zero heap at all, and just have statically allocated objects for the 'forever' set (which makes it easier to see what's using memory). If you're careful you can also statically work out worst-case stack usage as well and have a decent guarantee that you won't ever run out of memory. If there are short-lived objects, a memory pool or queue is usually the best option (though at that point you do invite use-after-free type errors and pool exhaustion). I would say with this style it's extremely rare to have memory safety issues, but it's also not really suitable to a lot of applications.


Why is it not really suitable to a lot of applications?


C++ uses "value type" to mean either a scalar object (int, tuple<double>, etc.) or a container that manages heap memory for you, e.g. a vector of a value type. If you stay in that world you can basically ignore memory management.


Staying away from std::unique_ptr<T> and std::unique_ptr<T[]> while using std::vector<T> sounds kind of silly. The last one is a generalized version of the first two. So claiming you don't use the first two is really misleading.


vector is a regular value type, unique_ptr isn't.


I'm not sure how you define "value type" (it certainly isn't C++ terminology; are you coming from C#?) but in any case, this is a distinction without a difference. You can replace every use of std::unique_ptr with std::vector and just switch a few method calls (like using .data() instead of .get()) and you'd achieve the same effects, just slower. I'm not sure what the point would be though, other than to be able to claim that you don't use smart pointers.


But std::shared_ptr is also a value type :)


It isn't. If you copy a value and change the copy, the original is unaffected.


C++ classes in general are value types. The fact that instances might share a common resource does not change that. See also https://learn.microsoft.com/en-us/cpp/cpp/value-types-modern....


std::unique_ptr is smart without shared ownership. You not knowing this makes your claim that memory management is trivial much less credible.


I've been programming in C++ for 20 years and involved in the ISO C++ standards process for 12 years.

Of course I know unique_ptr.



