The concept isn't necessarily terrible, but the execution and condescension sure are. And some of the rules are just pointlessly argumentative, such as refusing to use <cstdio> and instead using <stdio.h> - those are officially documented to be different things with different behaviors. Blanket-banning one of them doesn't improve simplicity, especially if you're banning the "wrong" set of headers. Unless the goal is to also ban namespaces, but that's not called out as such (and namespaces are one of the least contentious C++ features - everyone seems to like them well enough).
Or similarly:
> Don't use anything from STL that allocates memory, unless you don't care about memory management.
So don't use std::vector? Or std::unordered_map? Or std::string? These are all perfectly fine classes, and banning them makes no sense at all. Maybe the goal was to ban "hidden cost" classes like std::function, but rolling your own 'C-style' replacement is a hell of a lot more complex and error-prone (raise your hand if you've seen a C pointer callback that forgot to have a void* context, or that mismanaged said void* context...)
I think any attempt at defining a strict subset of a language is going to face some pushback like yours, and while I agree with most of the choices, this one does seem strange. Rolling your own containers is boring and error-prone, and they won't have all the features of the STL.
Also, while iostream is a bit of a mess, having it be strongly typed means that it's, in my opinion, well worth using over cstdio.
One thing I was surprised not to see mentioned here is multiple inheritance. In my experience it gets really messy really fast.
Also I really recommend reading the linked archived Boost discussion about a geometry library, it's quite funny. It starts with:
    double distance(mypoint const& a, mypoint const& b)
    {
        double dx = a.x - b.x;
        double dy = a.y - b.y;
        return sqrt(dx * dx + dy * dy);
    }
And after a couple of pages of refinements ends up with:
> One thing I was surprised not to see mentioned here is multiple inheritance.
There's nuance to banning that, since implementing multiple interfaces is technically multiple inheritance in C++. So I think I'd agree with you, but with an exception for pure-virtual classes.
> Also, while iostream is a bit of a mess, having it be strongly typed means that it's, in my opinion, well worth using over cstdio.
> So don't use std::vector? Or std::unordered_map? Or std::string? These are all perfectly fine classes, banning them makes no sense at all.
Orthodox C++ bans exceptions, so if an allocation failure occurs within one of those classes, the only thing your program can do is abort.
That’s fine for some programs - Google’s C++ style guide takes this approach, as does LLVM. But for other programs, where aborting on allocation failure is not acceptable, there’s no way to use those classes without using exceptions.
> Google’s C++ style guide takes this [ no exceptions; libraries abort ] approach
Note the style guide says they’d rather use exceptions but had too much legacy code (and now must have a couple of orders of magnitude of it by now): “On their face, the benefits of using exceptions outweigh the costs, especially in new projects. However, for existing code, the introduction of exceptions has implications on all dependent code. … Because most existing C++ code at Google is not prepared to deal with exceptions [don’t use them]”
Google’s non use of exceptions is often used as justification for not using them either, by people who haven’t actually read Google’s style guide.
The only "real" issue with exceptions is binary size, and binary size does seem to be a rather specific goal of Orthodox C++ even though it's not explicit about that. But on any desktop or server deployment binary size is also so incredibly irrelevant it's not a good justification. On mobile maybe, but it's questionable. For embedded definitely, though.
Exceptions have hidden costs that are hard to spot. For example, the generated assembly for C++ code with or without noexcept can be quite different. Part of the reason is that DFA-based optimizations get overwhelmed by the number of implied execution paths when exceptions may be thrown. So the optimizer is essentially forced to give up more often, resulting in slower code for the normal case. This is orthogonal to any binary size issues from linking the runtime support for exceptions.
Of course, but that's comparing error handling code with non-error handling code.
The comparison you need to make is instead exceptions vs. return values. Unless you're going to argue all errors should abort by default or similar.
I do wish noexcept had better support, or was even the default (C++ and wrong defaults, a tale as old as time). But it's disingenuous to compare exceptions against nothingness.
No, this holds in a fair comparison with manual error checking. In the presence of possible exceptions, the compiler has to make the assumption that any nontrivial expression can throw, especially if its implementation is opaque at the callsite. With explicit error handling, this is reduced to those checkpoints that are inserted manually, which are generally fewer.
If you call a function within the condition of an if statement, the compiler needs to conservatively assume that the execution flow from there is either to the rest of the conditional, or to one of the catch blocks, or to the function's stack unwinding. If there cannot be an exception, the execution flow has none of these additional branches that bog down the DFA.
While interesting, I guess that sounds like a straightforward compiler bug, not a problem with the feature overall. Unless there's some missing nuance as to why 'noexcept' was apparently being ignored, which would need further investigation.
> But for other programs, where aborting on allocation failure is not acceptable
Sure but that seems like the edge case not the general case? Nearly any desktop or mobile app or game won't have any such constraints, for example. And you've also got overcommit to deal with on those platforms, so it's not like your allocation failure will actually happen at malloc time either.
So outside of embedded, which tends to have a variety of constraints, what even pretends to handle allocation failures across the entire program?
Not really. You can supply allocators to containers, but it becomes iffy for strings. Custom allocator support for std::function was removed in C++17. And for the std::shared_ptr refcounters you have to remember to go through std::allocate_shared; plain make_shared or the constructor circumvents your allocator.
It is quite common in certain contexts to allow the user of a library to provide their own custom allocator. And this would be kind of pointless if some allocations circumvent that, wouldn't it?
The deeper rationale is that some systems like to assign fixed memory pools to subsystems. These may be reset or destroyed at certain points in the applications life cycle and the general expectation is that this frees all memory used by said subsystem. If you still want to use the STL in that context, you need to provide custom allocators for everything that potentially allocates.
But by doing that you'd be breaking the safety (and functionality) of shared_ptr, as you'd end up with potentially dangling references (and weak pointers are similarly broken). Which then raises the question of why you are using shared_ptr in the first place. You could use a custom allocator just for auditing that all outstanding references were released, I suppose, but releasing the memory anyway is just adding bugs. That can be a fair risk to take; shared_ptr just seems like the wrong tool to signal that "running with scissors" behavior is appropriate.
As long as you can guarantee that no shared_ptr lives longer than its backing memory pool, it's perfectly fine. The same goes for every other kind of pointer wrapper you could dream up.
At least you can write a replacement for shared_ptr. The same is not true for std::function. This is the only named type a capturing lambda converts to according to the language standard and it's an STL type with complex behind the scenes behavior.
You can write your own std::function too, and it's not the only STL type that can take a capturing lambda (std::packaged_task, for example).
A capturing lambda is just a class with an operator(). It's complicated to do what std::function does, but fully possible.
In fact, custom std::function replacements have better lambda support than std::function itself. Such as unique_function in https://github.com/Naios/function2 which can handle non-copyable lambdas.
While I can't make a general comment, for some applications such as videogame engine development, std::unordered_map for example is not a perfectly fine class. Its linked-list-based memory layout means it is extremely CPU cache-unfriendly, making it incur an inordinately high amount of cache misses. [0] As you have a budget of 6ms per frame for 165 FPS, and you often need to iterate over many thousands of entities in each frame, std::unordered_map for one is a non-starter.
In general, when discussing which language features to use, I would keep in mind that something can work well for an application that is fairly straightforward and has modest performance requirements, while being a complete no-go for a different use case. I'm not saying we should throw C++ in the bin, but I also think many people have their reasons for saying some C++ features are more trouble than they're worth.
Still, it's not ok to just blanket-ban them. If they're not efficient enough for certain high-performance areas, just don't use them there. I don't see why they can't be used alongside more efficient implementations in areas where parallelism and memory locality are not that important (for a menu, say, or to hold settings).
I feel that banning them altogether may lead people to implement stuff from scratch even when they don't need it, creating another possible source of bugs and vulnerabilities.
Sure but the argument against STL wasn't performance but rather that it can allocate at all.
So optimal unordered map implementations, like Google's absl::flat_hash_map or Facebook's F14 would be similarly banned as they internally allocate.
I'd definitely recommend those libraries over std::unordered_map for sure, but it's not like std::unordered_map is unusably slow or broken, either. It's fairly comparable to Java's HashMap, which everyone uses without thinking about it.
> As you have a budget of 6ms per frame for 165 FPS, and you often need to iterate over many thousands of entities in each frame, std::unordered_map for one is a non-starter.
that's when you use one of the two hundred alternative implementations which keep the same API but offer different performance compromises & tradeoffs :
If you prototyped with std::unordered_map you'll likely just have to change a couple types and add the relevant includes here and there, rerun your benchmarks, and tada.
Try not to read condescension into things so easily. Eg I'm pretty sure that the line you quote,
> Don't use anything from STL that allocates memory, unless you don't care about memory management.
is not meant to be condescending but literal. Ergo, if you don't care about memory management (which is fine), then feel free to use lots of STL stuff.
Case in point: for lots of embedded software, memory matters a lot and it's important that it's obvious to the reader of the code what memory gets allocated where. The STL classes you quote make this harder to see. But if you have lots of RAM anyway (ie "you don't care about memory management") then this does not matter much.
A key goal of this list appears to be, I quote, "Projects written in Orthodox C++ subset will be more acceptable by other C++ projects". This particular guideline makes it more likely for your code to be deemed acceptable by embedded programmers so it seems to fit. I don't think it was intended to be condescending in any way. I think the same holds for the other points.
It could've definitely been written a bit clearer though.
> is not meant to be condescending but literal. Ergo, if you don't care about memory management (which is fine), then feel free to use lots of STL stuff.
I don't see it that way. Plenty of people care about memory management and also see the STL as a fine tool to use. Little is gained from a memory management perspective by avoiding e.g. std::vector. Whether condescension is intended or not, this definitely comes across as talking down to me, and worse, it is incorrect.
> Try not to read condescension into things so easily.
If they didn't intend condescension, they should not have put a comic displaying exactly that (and with essentially no further relevance) into the middle of the article.
I don't think "more acceptable by other C++ projects" works out well. If I don't appreciate how RAII and exceptions clarify normal control flow, I won't write code that avoids leaking when its dependencies throw. This poisons any project it spreads to. Google famously had to abandon any hope of using exceptions because of a critical mass of flawed legacy code.
> Don't use anything from STL that allocates memory, unless you don't care about memory management.
The important part is "if you don't care about memory management".
In language benchmarks, C++ is often portrayed as less efficient/slower than C. C++ is also commonly shunned when it comes to embedded software, especially on low performance chips. It doesn't have to be! C++ is (almost) a superset of C, and generally, they use the same compiler backend, so your C code should compile on a C++ compiler and generate the same binary.
Now that you know you can write C++ that runs as efficiently as C, you can start to carefully add features that keep the spirit and efficiency of C while making your life easier, or even improving performance. For example, you may want to replace macros with templates, use proper objects instead of doing what stdio does with FILE*, use namespaces, etc...
That's exactly what Orthodox C++ is about. It is C++ for those who want to write C. There is absolutely nothing wrong with STL containers and smart pointers and all the fancy stuff that make up modern C++, there is also nothing wrong with using languages that have heavy runtimes and garbage collectors, it is just not the use case Orthodox C++ is addressing.
I think that's the good thing about the mess that is C++11 and beyond. You can pick what you want. You can stay low level and know exactly the memory layout of your program. Or you can choose not to have a single raw pointer and let it manage the memory for you.
Note: There are still good reasons to use C over C++. A big one is that linkage is a lot simpler and more compatible in C. C++ compilers do name mangling to support things like namespaces and polymorphism, and they require the linker to understand their particular conventions; you may also need the right libstdc++ for your target. You also need to be aware of static initialization.
I was also disappointed by that post. I'm definitely not a fan of modern C++ but I wish they would have explained their reasoning in more detail. How is this different from the other C++ subsets that they list in the references?