I think for a while Fortran was pulling ahead for large-scale numerical computing, but the mess of different options for parallelism (and, in our case, a proliferation of algorithm choices that made not just polymorphism but the explicit distinction between polymorphism and templates more important for code architecture) has brought C++ back into the limelight.
I know that some competitors in our space have a Fortran codebase, and they essentially had to hack together their own version of C++ templates as part of their build process (i.e. pre-process a template file to generate multiple sources, which are then compiled).
Your parent poster is also right. Rust offers a lot of safety guarantees; a modern, well-designed language around them; package management; test and benchmarking integration; and extremely good type safety, while still mostly having zero-cost abstractions. If your goals are performance, control and safety, Rust is a much better choice than either C or C++. I'm still hopeful that it will become the next big embedded language (mostly processor support is missing here, but thanks to LLVM there's hope), especially since we haven't seen any other good attempt at GC-less computing (no GC is pretty important for real-time systems).
But I must admit that with C++ you can probably eke out a few % more performance. The thing is, though, that for the 99% of projects where performance and the absence of a GC matter, Rust will be fast enough, and thanks to its safe abstractions often faster than C++.
One example is if you have a lot of text processing to do. Often you can reuse a lot of the strings, and in Rust the borrow checker lets you do so safely. Allocating new strings is pretty much the biggest performance problem in most string processing, so avoiding it matters.
Rust gives you a lot of safety when it comes to multithreading as well. It's perfect for creating little servers and using all your processors for performance.
These two together, and the lack of 2-3 day debugging sessions where you trace down that one tiny memory bug in C++, make Rust the biggest contender to replace C++. Of course, if you only care about performance, C or C++ is probably still a slightly better choice. Slightly.
And while the rest of the world knew how to find a file size... how long was it before C++ cleaned that very very simple corner up?
And if you look under the hood at the implementation of std::conditional.... I feel ill.
The thing I like about Alexandrescu is he invented this meta programming stuff... super super super clever magic...
...and then backed away saying it shouldn't have to be so hard, we shouldn't need magic to program.
If you must use C++, just use another language to generate the C++. All this template mumbo jumbo. Who says your meta-programming language has to be the same as your target language?
I used to spend some time looking for the perfect vector (as in linear-algebra) library. Templatized for vector dimension, data-type, blah blah blah. Dude, just use Python to generate it. Duplication doesn't violate DRY if the duplication is generated from a single higher-level source.
Doing meta-programming in C++ is a really limited way of thinking.
A pointless pin would indeed be pointless.
I'd lay blame at IDEs and the way they take over the build system along with IntelliSense. They don't play well with anything generated before the compile step. Even adding a build step is an alien concept to many, so we end up with the build step shoehorned into languages.
Overuse of if statements is generally something of a code smell and tends to make code harder to follow. To my taste the C++ way of doing things is generally preferable as a result even if more verbose. Given this is probably somewhat a matter of taste it is hard to justify changing C++ to match the D way IMO.
I mean, how many C++ developers actually write serious template code? How many of them could reliably explain what the keywords in the post do?
The idea that every developer is a library author (or the lisp extension that every developer is a language author) is common in many other language communities but it relies on the community working hard to make mastery of the language feasible for lots of people. The C++ community never bought into that notion; they inherited a very stratified class structure from Bell Labs.
I think this is an unreasonable assertion and not borne out by a read of the committee discussions.
For example: maintaining backwards compatibility. The community believes that it is more important that 20 year old C++ code run unmodified than that the language should be simplified. There's lots of stuff you could do to simplify the language but options dry up in a world where 20 year old code must be able to run unmodified.
So sure, the committee talks a lot about simplicity, but it isn't willing to sacrifice much.
Don't get me wrong: I'm glad that finally, in 2020, C++ will be almost but not quite as good as Common Lisp was at metaprogramming back in 1982. But it remains the case that eval-when and defmacro are both more powerful and dramatically simpler than anything the C++ committee has ever considered.
Simplicity and comprehensibility were things the committee had to give up in order to pretend they had "zero-cost" abstractions. Nothing in life comes free: everything, including all abstractions, comes at some cost.
Yes. As a slogan, it is imprecise. But it's always been talking about a very specific kind of cost: runtime costs. You're 100% right about there always being some kind of cost, but the slogan doesn't disagree with you.
(Some prefer "zero-overhead principle" instead to make this a bit more clear.)
(2) Even reducing it to runtime costs, it seems a bit nonsensical. Are C++ exceptions a zero cost abstraction? All the googlers I argued with about them would insist that they have unacceptably high runtime costs.
OK, but templates are surely zero (runtime) cost abstractions, right? Unless you start to worry about duplicate code blowing out your instruction cache but if that's a problem, no profiler in the world will ever be able to tell you, so I guess you'll never know just how costly the abstraction is, so you might as well continue believing it is zero...?
When it comes to exceptions, it's generally true on x64 that you don't pay for what you don't use (there's no performance penalty to exception handling if you don't throw) although that hasn't always been true for all platforms and implementations. It's also generally true that you couldn't implement that kind of non local flow control more efficiently yourself, although the value of that guarantee is a little questionable with exception handling.
I'd argue that no language has a really good story for error handling. It's kind of tragic that we have yet to find a good way to deal with errors as an industry IMO. The most promising possible direction I've seen is in some of the possible future extensions to C++ - it's widely recognized as an area for improvement.
Template code bloat is another case of not imposing more cost than if you implemented it yourself and you have pretty good mechanisms in C++ for managing the tradeoffs.
I'm inclined to agree with you but just about everyone at Google says the opposite and most C++ shops I've seen agree with them. I've made this argument and lost repeatedly. So, it seems like the community can't even agree on which abstractions are zero cost (or maybe whether some zero cost abstractions are actually zero cost?). To the extent that the community itself has no consensus about these things, maybe they're not a marketing slogan that's helpful to use.
2. The other reply is right. Zero cost compared to the best possible implementation.
In this context 'zero-cost abstractions' refers to zero runtime performance cost and C++ comes closer to achieving that than most other languages. It doesn't mean zero compile time cost or zero implementation complexity cost but both of those things can end up better or worse due to design decisions and quality of implementation. Given that zero cost refers to runtime performance however, the committee is not 'pretending' they have zero cost abstractions.
It is true that simplicity and comprehensibility are not C++'s highest goals / values but they are not ignored or seen as having no value. Indeed they are major topics of discussion when new features are being considered. Sometimes they are in tension with or even in direct conflict with other goals but not always.
This pains me, but every time I think "just toss XXX out, goddammit!" I think of IPv6. C++11 is still the most popular dialect of C++, even for new development I believe, and C++14 is the hot new thing to many people.
> Don't get me wrong: I'm glad that finally, in 2020, C++ will be almost but not quite as good as Common Lisp was at metaprogramming back in 1982. But it remains the case that eval-when and defmacro are both more powerful and dramatically simpler than anything the C++ committee has ever considered.
C++ is held back by having statements. If the basic structure were an expression a lot of programming, much less metaprogramming, would be simpler.
A library, especially a C++ library, is way more than reusable code. Developing a library requires the developer to spend time making fundamental design decisions that he doesn't have to make when developing a module lost somewhere within a project tree: how to organize the project into interface and private source files, how the lib should be deployed, how to meet upstream dependencies, how to avoid breaking compatibility with previous releases while keeping the code resilient to subsequent changes, how to add metadata to the project, how to handle optional features, etc. etc.
Working in a large project, as you note, naturally leads to many conversations about sharing code and/or functionality. So you bundle something together or add an access point, call it "library X" and you are done, right? Any problems can be patched around later as you are working on a common base.
To me, this is missing > 50% of the work in designing and delivering a library for general use, which is why it often causes problems when you treat it as "done".
Which isn't to say this isn't the right thing to do in your situation! It's just that this is vastly different from what someone might mean by "library developer". It's not prima facie crazy to have language features mostly aimed at the latter, if it's a language often used for that. Which, for better or worse, C++ (still) is.
Similarly operator overloading via a string that tells you what you are overloading seems... insane? Very error-prone & complex?
Not that C++ is great here or anything, but it seems disingenuous to claim D's complex thing is for everyone while C++'s nearly equally complex thing is too complex for everyone.
Frankly, you should try D for just 5 minutes and see for yourself, because no, it really is sane and works well. I've never seen anyone complain about this...
See here it is used to implement all operators for small vectors in 46 lines:
But you could still do that and not be string-based. It could (and should!) be an enum of the operator instead. opBinary takes a fixed number of ops, but the parameter type of string has an infinite number of values.
Whether or not the design of having a single operator overload method is a good idea or not is independent from what I'm specifically calling insane which is that the parameter type to that method is a string.
As an example, we needed locked containers, so we wrote a little template and specialized it over the couple of containers we needed. It supported just what we needed. If this same functionality were added to the standard container library, it would not only have needed to be thought out to handle every non-locking case, but would have required either a lot of repetitive boilerplate (and repetitive specializations) or else additional hair that was not worth our while to learn/use. We were able to avoid the problem by adding some documentation in the local style guide.
Use what you need, learn what you need. Don't pay for what you don't use :)
A lot of these features seem like ways to hack around the fact that C++ templates are not generics, they are literally templates for writing copies of classes. It seems like features like this are going to make code size explode.
A lot of the examples also seem very smelly from an OOP perspective. Why should one class be able to have different fields depending on template parameters? That seems like something one should do in a subclass.
Similarly, subclasses as opposed to templates carry a runtime cost. Virtual function calls are not free. Why pay that cost if you can deduce the right code from the types at compiletime? Besides, and this is probably a matter of opinion, I find code hard to follow that uses inheritance heavily.
An example of this is that generics are "reified". This is to say that you can treat each instantiation as a unique type at runtime. This is not the same as monomorphisation though, as there are rules used to promote code sharing and avoid overly aggressive specialization. For example, boxed type parameters will generally share all code paths until the JIT decides to specialize a specific call site. Value types tend to gain a lot more from immediate inlining since autoboxing (another peculiar feature of .NET) can be avoided in more cases, so these are usually specialized upon reification (much closer to what you'd see in a template, though we're still limited to type parameters here).
In Java, the lack of user defined unboxed value types makes these distinctions less attractive and thus you see they opt for type erasure and reliance on clever JIT heuristics. They do miss out on things like type-specific static class members but I've only found a few uses for this in .NET (and even then it tended to surprise folks, main case was for a type safe structured logging system with efficient runtime control of trace points).
Having said all of that, the code bloat of C++'s templates causes issues with compile time, error message comprehension, and to some degree, cache efficiency but in return the programmer gets almost complete control over all of the trade-offs mentioned above. One can have templates duplicate code, or use abstract classes as interfaces for virtual dispatch, or even a mix where the template derives shared instances for certain types. This "we can have it all" mentality is a burden that may eventually be addressed by making the common cases easier to comprehend, debug, and compile.
I'm not necessarily going to wait for C++ to change but it's interesting to watch it make its way towards new goals while other languages mature enough to replace it in certain cases (Rust obviously but there are others like Zig which I think are worth watching).
- Support 'n' dimensional vectors (2, 3 and 4 are the only ones commonly used in 3D graphics and games).
- Support a choice of underlying element type (at least float, double and int but it's handy to be able to support custom types like a fixed point or rational type too).
- Support operator overloading for natural expression of things like adding two vectors.
- Be very close or identical in performance to the equivalent hand written variant for every combination of dimension / element type (excluding optimizations for particular SIMD element widths etc.)
- Not be significantly worse for debugging than the equivalent hand written code (this is as much a tooling issue as a language issue).
It's possible to meet all of these requirements in modern C++ without using any particularly exotic metaprogramming functionality. It's impossible in C#. I don't know of any other language that meets these requirements as well as C++ although I'm not really familiar with the facilities offered by e.g. Rust.
In C#, there is no efficient and syntactically pleasant way to work with numeric types generically, for example (I can't write 'a + b' and have it work for any types that provide operator+). That doesn't require C++'s compile-time duck typing (you could mandate something like C++ Concepts to specify a numeric interface; I believe this is sort of what Haskell typeclasses do), but it's easy, efficient and syntactically pleasant to do in C++.
The limitations of C# generics bite it in other ways too - the members of Enumerable are useful and the syntax is ok (not as nice as F# or C++ Ranges) but C# can't match the performance of C++ Ranges given the way they are implemented. They also get bitten by the operator problem - look at how Enumerable.Sum() has implementations for every built in numeric type and doesn't out of the box support your custom Numeric type.
And C# going all in on OOP is a part of that. It's reasonably clear now that OOP was mostly a big mistake and a dead end. C++ is less infected because it never went all in on it; it has always been a multi-paradigm language.
This was in the mid 90s and things are a lot better these days but still, I had better macro support writing Lisp in the 1970s.
Lisp is so much easier. But current C++ is surprisingly expressive and generates pretty good production code. Instrumenting your Lisp with a lot of type declarations is pretty messy too.
Then yes, there is nothing like a "proper" macro. It is just an attempt to fix what was a limited language in the first place.
And it is something that C++ has, by now, managed to solve (not that badly).