All C++20 core language features with examples (oleksandrkvl.github.io)
483 points by dgellow 12 days ago | 419 comments





I'm probably going to make a few enemies with this opinion, but I think modern C++ is just an utterly broken mess of a language. They should have just stopped extending it after C++11.

When I look at C++14 and later I can't help but throw my hands up, laugh, and think: who, except for a small circle of language academics, actually believes that all this new template syntax crap actually helps developers?

Personally I judge code quality by a) Functionality (does it work, is it safe?), b) Readability, c) Conciseness, d) Performance, and e) Extensibility, in that order, and I don't see how these new features in reality move any of these meaningfully in the right direction.

I know the intentions are good, and the argument is that "it's intended for library developers" but how much of a percentage is that vs. just regular app/backend devs? In reality what's going to happen is that inside every organization a group of developers with good intentions, a lack of experience and too much time will learn it all and then feel the urge to now "put their new knowledge to improve the codebase", which generally just puts everyone else in pain and accomplishes exactly nothing.

Meanwhile it's 2021 and C++ coders are still

- Waiting for Cross-Platform standardized SIMD vector datatypes

- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

- Debugging cross-platform code using couts, cerrs and printfs

- Forced to use boost for even quite elementary operations on std::strings.

Yes, some of these things are hard to fix and require collaboration among real people and real companies. And yes, it's a lot easier to bury your head in the soft academic sand and come up with some new interesting toy feature. It's like the committee has given up.

Started coding C++ when I was 14 -- 22 years ago.


> - Waiting for Cross-Platform standardized SIMD vector datatypes

Which language has standardized SIMD vector datatypes? Most languages don't even have any ability to express SIMD, while in C++ I can just use Vc (https://github.com/VcDevel/Vc), nsimd (https://github.com/agenium-scale/nsimd), or one of the ton of other alternatives, and have stuff that JustWorksTM on more architectures than most languages even support.

- Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

What are the other native languages with a standardized memory model for atomics? And what's the problem with using libraries? It's not like you're going to use C#'s or Java's built-in threadpools if you are doing any serious work, no? Do they even have something as easy to use as https://github.com/taskflow/taskflow ?

- Debugging cross-platform code using couts, cerrs and printfs

because people never use console.log in JS or Console.WriteLine in C#, maybe?

- Forced to use boost for even quite elementary operations on std::strings.

Can you point to non-trivial Java projects that do not use Apache Commons? Also, the boost string algorithms are header-only, so you will end up with exactly the same binaries as if they lived in some std::string_algorithms namespace:

https://gcc.godbolt.org/z/43vKadbde


Most of what you said is a fair retort, but boost isn't quite as rosy as you make it seem. It's great but it has serious pitfalls which is why many C++ developers really hate it:

A) Boost supports an enormous number of compilers & platforms. Implementing that support requires a pile of expensive preprocessor machinery that slows down the build & makes it hard to debug.

B) Boost is inordinately template heavy (often even worse than the STL). This is paid for at compile time, and sometimes at runtime and/or in binary size if the library maintainers don't structure their templates so that the inlined template API calls a non-templated implementation. The first C++ talk I remember covering this problem was about 5-7 years ago & I doubt boost has been cleaned up in its wake across the board.

C) Library quality is highly variable. It's all under the boost umbrella, but boost networking is different from boost filesystem, different from boost string algorithms, different from boost preprocessor, boost spirit, etc. Each library has its own unique cost impact on build, run, & code size that's hard to evaluate a priori.

Boost is like the STL on steroids but that has its own pitfalls that shouldn't be papered over. Maybe things will get better with modules. That's certainly the hope anyway.


> which language has standardized SIMD vector datatypes ?

Java is getting it soonish. https://openjdk.java.net/jeps/338

Rust has it (but it's fairly platform specific) https://doc.rust-lang.org/edition-guide/rust-2018/simd-for-f...

Dart has it https://www.dartcn.com/articles/server/simd

Javascript has it https://01.org/node/1495

It's actually a bit impressive how many languages have it at this point.

> what are the other native languages with a standardized memory model for atomics

Rust, C, Go?

> It's not like you're going to use C# or Java's built-in threadpools if you are doing any serious work, no ?

Define "serious". By most metrics JVM apps run within 1-2x of C++'s runtime; that's really not terribly slow for a managed language. On top of that, there are a lot of places Java can outperform C++ (high heap memory allocation rates). Java's threadpools and concurrency model are, IMO, superior to C++'s.

> Do they even have something as easy to use as taskflow

Several internal and external libs do. Java's completable futures, kotlin's/C#'s (and several other languages) async/await. I really don't see anything special about taskflow.

> can you point to non-trivial java projects that do not use Apache Commons

Yes? It's a fairly dated lib at this point, as the JDK has pulled in a lot of the functionality from there and from guava. We've got a lot of internal apps that don't have Apache Commons as a dependency. I think you are behind the times on where Java as an ecosystem is now.


... I just checked your link and wouldn't say that any of these languages have SIMD more than C++ has it currently:

- Java: incubation stage (how is that different from https://github.com/VcDevel/std-simd). Also Java is only getting it soonish for... amd64 and aarch64 ??

- Rust: those seem to be just the normal intrinsics which are available in every C++ compiler ?

- Dart: seems to not go beyond SSE2 atm ? But it looks like the most "officially supported" of the bunch

- Javascript: seems to be some intel-specific stuff which isn't available here on any of my JS environments ?

* Standardized memory model

- Literally false for Rust: https://doc.rust-lang.org/reference/memory-model.html

- The C11 one directly comes from C++: https://stackoverflow.com/a/8877562/1495627

- The Go one does not seem to support acquire-release semantics, which makes it quite removed from e.g. ARM and NVidia hardware, from what I can read here: https://golang.org/pkg/sync/atomic/


> which language has standardized SIMD vector datatypes ?

D does:

https://dlang.org/spec/simd.html


That's quite well thought out; without compile-time checks that the operations exist, you end up with code that either targets a very small subset of widely supported operations, or is not really cross-platform. I've seen too much of the following in theoretically portable code, because the software fallback will typically be an order of magnitude worse than using a different set of datatypes and operators:

  #if defined(__NEON__)
  // "portable" SIMD goes here
  #elif defined(__ALTIVEC__)
  // different "portable" SIMD goes here
  // ...
  #endif

There was some discussion about what to do with vector types and operations that weren't supported by the hardware. We decided on compiler error instead of emulation, because the emulation would be terribly slow and the user may be unaware that he's getting emulation.

With a compiler error, the user unambiguously knows if the SIMD hardware is being used or not.


> We decided on compiler error instead of emulation

DMD does that, LDC (based on LLVM) does the sensible thing and vectorises with the widest available native SIMD. No idea what GDC (based on GCC) does.




> what's the problem with using libraries?

Spot the difference:

C++: Conan (barely adopted), vcpkg (barely adopted), single header file libraries (!!!)

Java: Maven (de facto standard), Gradle (compatible with Maven), Ivy (compatible with Maven), heck Ant (compatible with Maven).

C#: Nuget.


I hope they keep going down this path and make it into a real mess of a language, so that people can finally stop pretending C++ is the solution to any problem, when it is in fact the cause of a lot of your problems.

I began C++ coding over 20 years ago as well, and it required reading thick books even then. I remember my classmates at uni really hated software development, all because of C++. It was way too hard as a beginner's language, even 20 years ago.

I look at all these new features, and I am like: How on earth are you going to teach all this crap to students?

They have painted themselves into a corner. It becomes a language only for those who have already programmed it for 10-20 years.

This idea that it is only for library developers is a bunch of crap. A lot of learning a language is really about reading the code of the standard library. That was one of the beauties of writing Go code. You regularly look at standard library code and are even encouraged to do so. It teaches you a lot about good style.

Same deal when I program in Julia. Looking at library code is totally normal and common.

Except in C++. I avoided looking at library code like the plague. And I suppose, now it will only get worse.

The worst part is that this isn't just a problem for C++ developers but for everybody else too. So many key pieces of software rely on C++ code. It becomes ever harder to migrate that code or interface with it as C++ complexity grows.

That was the beauty of a language like Objective-C. Unlike C++, it is a fairly simple language which you can interface with easily. The result was that porting to Swift was really easy. When porting iOS apps to Swift I could pick individual functions and rewrite them in Swift.

There is no hope doing anything like that with C++.


> I look at all these new features, and I am like: How on earth are you going to teach all this crap to students?

You don't. You teach "A tour of C++ 2nd edition"[0] which presents a clean and smaller subset of the language people can wrap their mind around, with everything someone new to modern C++ needs to know to be effective. And you supplement this with "C++ Core Guidelines"[1] which can be enforced by code analysis and provide some examples of common mistakes or questions people might have.

You do not need to know all the details of the language and know every single feature. And you wouldn't teach everything to a student.

But it's true that there is some overhead due to the complexity of the language.

[0]: https://www.stroustrup.com/tour2.html

[1]: https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines


> I'm probably going to make a few enemies with this opinion, but I think modern C++ is just an utterly broken mess of a language. They should have just stopped extending it after C++11.

This is the popular refrain of the day, so I don't know why you couch this as if you're saying something controversial.


The popular refrain has more to do with the lack of memory security features in the language, although I'm sure they will bolt a borrow checker or something on to the language.

There are currently enclaves of developers who know varying versions of C++. There's a good chance that a 20-year C++ veteran would have to consult the documentation for syntax. That's concerning. Defining what something isn't is nearly always more important than defining what it is, and C++ is seemingly trying to be everything.


This is a popular (and increasing) trend in HN comments.

"It is a poor craftsman who blames his tools."

This is a common saying because it is a common occurrence.

People who use the language effectively know all about the complaints. Those people live with their complaints knowing no other language even comes close to meeting their needs. No language on the horizon is even trying to meet their needs.

C++ usage is still growing by leaps and bounds. Attendance at ISO Standard meetings is soaring; until Covid19 killed f2f meetings, each had more than any meeting before; similarly, at conventions. Even the number of C++ conventions held grows every year, with new national ones arising all the time.

Rust is having a go at part of the problem space, and making some headway. But more people pick up C++ for the first time in any given week than the total who have ever tried Rust. It is still way too early to tell whether that will ever not be true.

So the HN trend is very much an echo-chamber phenomenon, with no analog in the wider world.


> "It is a poor craftsman who blames his tools."

> This is a common saying because it is a common occurrence.

Ha ha. This is not applicable to software, and, I assume, to some craftsmen.

What's the percentage of software developers that actually get to choose their tools? 40%? 60% at best? Though most likely it's just 20%.

Most projects are pre-existing, it's only natural. You can't create more projects than those already in existence, once a field matures a bit. Which means that you have to use what's already there.

Plenty of people are forced to use bad tools. And they can for sure blame them.


Many craftsmen do not get the tools they could wish for.

Your craft is your personal responsibility; you use your tools, they don't use you. So, your product is the result of what you do, not what your tools do. Limitations of your tools leave you with greater responsibility to ensure results that satisfy whatever standard you work to.

Blaming your tools for bad results tells people much more about you than about the tools.


First of all, we are not craftsmen. We are more like factory workers. Ford factory worker #515 had no say in the 1000 ton machine just installed in the factory. He just had to make his part of the car.

We delude ourselves into thinking we're all Picassos when we're just house painters, at best.


That also is a matter of choice. Curiously, the more you get paid, the more latitude you get.

It'd be more accurate to say not many of us are craftsmen. (Craftspeople?) There are still some ways to make money through creative, open-ended development; they've just always been on the rare side.

As long as trading firms use C++ and some research places, it won't go away.

These places also have all the resources as well.


Trillions of lines of existing code are also a strong argument for why C++ is going to stay for a while. Lots of good C++ programmers I know would be really excited to use Rust, but the interop with legacy systems is not worth it for many use cases.

True, but there's plenty of Stockholm Syndrome as well. C++ is a mess, and there are people that will defend that mess to the end of times. Those people managed to get pretty good and have a deep understanding of all of its quirks, but lack the ability to take a step back and admit that yes, nobody without masochistic tendencies would get into C++20, unless they're already familiar with it.

> except for a small circle of language academics

I'm sorry but can we stop hating on "academics"? No one in research matches your description. The intersection of academia and C++ contains only practitioners (like in the industry), who just want their code to work; and maybe some verification people who'd rather wish C++ was smaller because it is a hell of a beast to do static analysis on. Both these categories are real people having real use cases. The programming language crowd is generally more interested in stuff like dependent types or effect systems, not templates.

> soft academic sand

shrug.


If you replace 'academic' with the secondary definition, "not of practical relevance; of only theoretical interest," it is probably true though. Having known some of the C++ standard contributors, they strongly defend themselves against the "not of practical relevance" part with "look what I wrote". Sure it's clever, but adding language features just to say "look what I wrote, it's clever" is no excuse for building a language that's become a train wreck.

(I have been coding in C++ on and off professionally since 1985 and I do like some of the C++11 and C++14 features. The pointer improvements are great but the template stuff is a complete joke on us.)


> Sure it's clever, but adding language features just to say "look what I wrote, it's clever" is no excuse for building a language that's become a train wreck.

Actually, the rationale behind the language features you're criticizing is that people in the real world were already using some techniques in C++ in a needlessly complex and convoluted way, and these new additions not only simplify these implementations but also allow the compilers to output helpful, user-friendlier messages.

Take concepts, for example. You may not like template metaprogramming, but like it or not it is used extensively in the real world, at the very least in the form of the STL and Eigen. Template metaprogramming is a central feature of C++ consumed by practically every single C++ developer, even though few of them ever write such code themselves. Does it make any sense at all to criticize work that improves a key feature benefiting every C++ programmer, including those who never have to write code with it?

And no one of sane mind would argue in favour of shoehorning in #include and #ifndef/#define guards to the detriment of a proper module system.

Just because you aren't familiar or well-versed with some C++ features, or aware of how extensively they are used, it doesn't mean they are not used or that the stuff you don't know automatically qualifies as a trainwreck.


I have used C++ templates every day of the week for at least 15 years.

If you really did any serious work writing template metaprogramming code, or were aware of what happens under the hood in libraries developed with it, you wouldn't be criticizing recent contributions that improve its UX, for both developers and library/module consumers, as a trainwreck.

> When I look at C++14 and later I can't help but throw my hands up,

Why C++14? The changes were very minor and mostly about being able to declare lambda functions with auto, which is extremely useful.

> Waiting for Cross-Platform standardized SIMD vector datatypes

I only know of ISPC having this, but there are also lots of SIMD libraries for C++ that are small and have minimal dependencies.

> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

std::thread, atomics, and mutexes were added in C++11 and work extremely well. OpenMP is in the top four compilers if someone wants super easy fork-join parallelism. What other languages make C++ look archaic here?

> Debugging cross-platform code using couts, cerrs and printfs

Both visual studio and Qt Creator have made this unnecessary for a long time (if you can do step through debugging). What other language are you thinking of that makes C++ look archaic here?

> Forced to use boost for even quite elementary operations on std::strings.

That's completely ridiculous. It is easy to avoid boost these days (thank god). This is DEFINITELY not worth using boost for. For a start, you can use https://github.com/imageworks/pystring on top of what C++ already has, combined with regular expressions.

I don't think anything you listed is actually a problem. If you had talked about not having a standard networking library or standard serialization it might have made more sense.


Just my 2 cents from personal experience.

> a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility

I use a lot of library features after C++11. Variant, span, and string_view are the most important ones. As to language features, structured bindings and variable templates come to mind. They pretty much hit all of your code quality points. I don't think these are for "a small circle of language academics" either (I'm definitely not in that "small circle"). Syntax-wise, metaprogramming can get ugly, yes. Even Stroustrup himself doesn't like it. I guess at this point it's just for "historical reasons".

> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

I think this one comes down to the fact that there is a vast range of parallel computing models out there, and C++ wants generality. I used to write a lot of MPI programs targeting supercomputers. I don't think any language would want to include that in the standard…

> Debugging cross-platform code using couts, cerrs and printfs

What’s wrong with printing? I even debug JavaScript programs with console.log(). It’s convenient.

If you just do local dev, debuggers work pretty well; you can debug however you want. I was unfortunate enough to have pretty much always worked on platforms where a good remote debugging session is hard to get, due to hardware capacity, legacy toolchains, or even ssh-ing onto the host being hard enough due to security. But that's hardly C++'s fault.

> Forced to use boost for even quite elementary operations on std::strings

It'd be great if std::string had more features. But I don't think it's a big deal. Personally I don't like linking boost into my programs, so I just write my own libraries for that. It's just elementary operations anyway.


But that's the point. Metaprogramming has gotten significantly better since c++11, and c++17 metaprogramming is extremely clean. Are we getting mad at them for improving things?

I don't think it's really as bad as this...

> Waiting for Cross-Platform standardized SIMD vector datatypes

We sort of have this? Compiler loop vectorization is, effectively, this. Granted, it's not standardized.

> nonstandard extensions ... to run computations in parallel on many cores

std::thread?

> Debugging cross-platform code using couts, cerrs and printfs

True, but I think this is not the language's fault. C/C++ debugging tools are great (the best?).

Debugging any sort of metaprogramming is a mess; I would definitely agree with that. Hopefully concepts will help.

> Forced to use boost for even quite elementary operations on std::strings.

This one i agree with.


IMO, you need to raise your expectations.

> > Waiting for Cross-Platform standardized SIMD vector datatypes

> We sort of have this? compiler loop vectorization is, effectively, this. Granted it's not standardized.

Auto-vectorization isn't remotely capable of what an engineer is capable of doing through intrinsics.

> > nonstandard extensions ... to run computations in parallel on many cores

> std::thread?

Compare that to the capabilities provided by Thread Building Blocks.


TBB is a great C++ library, but it seems like the existence of great C++ libraries should be a point in favour of C++, not a point against it.

So... You're arguing against it by pointing out an excellent library for the language? Was someone forcing you to use std::thread? Of course it won't have as many features as tbb; it's meant to help pthreads users.

Not exactly. I am reminded of n3557. The ability to write a library like TBB is a positive. But much richer libraries are just barely over the ridge. std::thread is not much more interesting than the abstractions provided by Boost in the early 00's.

http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2013/n355...


Part of the job of the standard library is to be boring. It's the reason third-party libraries exist in the first place. They give you the basic starting points to get the job done.

Look how many people are complaining on here about how complicated c++ is. If something like tbb was integrated they would be all over it.


> std::thread?

They probably meant something more declarative, along the lines of OpenMP.


https://en.cppreference.com/w/cpp/algorithm/execution_policy...

Most STL algorithms can be executed in parallel from C++17.


so... openmp?

It is standardized and widely implemented, just not part of the C++ standard itself.


> Waiting for Cross-Platform standardized SIMD vector datatypes

I'm not waiting for anything; I've been writing non-cross-platform ones for years now: http://const.me/articles/simd/simd.pdf http://const.me/articles/simd/NEON.pdf

These things are specific to CPU architectures, but other than that they're cross-platform and de facto standards set by Intel and ARM. The same source code builds with all mainstream compilers, regardless of the target OS.

> nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores

OpenMP is not part of C++ standard, but it’s still a standard in the sense they have a complete specification: https://www.openmp.org/specifications/ Mainstream compilers are reasonably good at implementing these specs.

> Debugging cross-platform code using couts, cerrs and printfs

Debugging story is not great outside MSVC, but it’s not terrible either. When I needed that, gdb worked OK for me.

> Forced to use boost for even quite elementary operations on std::strings

I agree the ergonomics can be better, but I’m not using boost, and I see improvements, e.g. std::string_view in C++/17 helped.


I'm not sure cross-platform SIMD vector data types are practical, at least not ones that don't force you to understand the implementation details on every microarchitecture you target.

If you actually care about performance, and presumably anyone that wants to use SIMD vector types does, you need to fit the higher-level data structures to the nuances of the microarchitecture you are targeting. Compilers don't do optimization at that level, you have to write the code yourself. Thin wrappers on compiler intrinsics is actually the right level of abstraction if you want to exploit those capabilities.

Similarly, how code is parallelized is completely dependent on what you are trying to do, the software architecture, and the silicon microarchitecture; there is no way to usefully standardize it outside of use cases so narrow they probably don't belong in C++. Parallelization in practice happens at a higher level of abstraction than the programming language.

And FWIW, I use many of these new C++ language features in real software every day because they provide immediate and compelling value. I am not an academic.


Code quality can also be judged by the quality of compiler output. C++ has many language features that allow compilers to generate efficient code. Unfortunately it also features incredibly complex abstractions that lead to insane binary interfaces.

Binary interface complexity is actually a huge reason why people rewrite stuff in C. When you write in C, you get symbols and simple calling conventions. Makes it easy to interoperate.


To some degree you can write in C++ and expose a C interface.

> C++ has many language features that allow compilers to generate efficient code.

It does, but it also has the ability to generate inefficient code. Sure, it's often the developer's fault, but I feel like it's much easier to shoot yourself in the foot in terms of performance in C++ compared to other compiled languages.

Some real-life examples for me:

* Missing a '&' for a function parameter resulting in that object being copied for each function invocation

* Adding a couple extra chars to an error message string in an inlined function which caused that function to then be 'too large' to inline according to the compiler


Allow me to be the devil's advocate.

> When I look at C++14 and later I can't help but throw my hands up, laugh and think who, except for a small circle of language academics, actually believes that all this new template crap syntax actually helps developers?

I do. There are a lot of features introduced since C++11 that make my life much easier. Sure, it's always scary to have to learn new things, but once you get over that hump, you start to see the benefits. Concepts and constexpr cut down on the template boilerplate crap a lot. Being able to use the auto keyword in more contexts means less repetition. Modules get rid of the ugly hack that is the preprocessor. std::span means I don't constantly have to pass around a pointer and length, or create a dedicated struct to encapsulate pointer+length. Sure, there are some more obscure features whose usefulness is questionable, but for a design-by-committee language, they're doing a slow but sure job of moving past the language's old warts.

> In reality what's going to happen is that inside every organization a group of developers with good intentions, a lack of experience and too much time will learn it all and then feel the urge to now "put their new knowledge to improve the codebase", which generally just puts everyone else in pain and accomplishes exactly nothing.

Feature adoption doesn't happen overnight. Remember, we're talking about a decades-old language burdened by backwards compatibility - it took a long time for people to migrate from supporting C++03 to dropping it in favor of C++11. Give it five or ten years, and I reckon you'll see people make use of C++17 and C++20 in much greater numbers.

> Waiting for Cross-Platform standardized SIMD vector datatypes

No argument there. That said, all mainstream compilers already have "immintrin.h" for x64 and "arm_neon.h" for ARM, and using them isn't particularly difficult.

> Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

Are you aware that std::thread has existed since C++11, and std::jthread and coroutines are in C++20?

> Debugging cross-platform code using couts, cerrs and printfs

This is a programmer problem, not a language problem. gdb exists, lldb exists, the Visual Studio debugger exists, and they're not particularly hard to pick up and use - if you're still using print statements to figure out why your application is crashing, that's on you.

> Forced to use boost for even quite elementary operations on std::strings

std::string is an RAII-managed bag of bytes. What kind of operations are you looking for? Stuff like concatenation and replacement can already be done in C++11 with std::string and std::regex. If you want to do lexical operations, like case conversion or glyph counting, then an encoding-aware library is a better solution.


On top of that, one can use strings as a normal "sequenced container of characters" and just use <algorithm>s on them. This is one of my favorite ways to write concise code in those interview questions (e.g. "That's just a rotate, then a partition").

Best answer, much appreciated.

> Give it five or ten years, and I reckon you'll see people make use of C++17 and C++20 in much greater numbers.

Frankly, the thought of it makes me want to migrate to Rust.

> Are you aware that std::thread has existed since C++11, and std::jthread and coroutines are in C++20?

Sure, but very low level. It'd be great to have a standard for something like TBB or OpenMP.

> std::string is an RAII-managed bag of bytes. What kind of operations are you looking for?

Looking enviously at Javascript strings and boost string algorithms...


> Sure, but very low level. It'd be great to have a standard for something like TBB or OpenMP.

The answer here is modules. Improve the story on shipping C++ libraries, and then who cares if it's in the "standard library" or not? It's not like anyone in JS land for example cares if something is native to the language or in a library since adding a library is trivial & easy.


Modules have nothing to do with shipping libraries (or dependency management), they are purely about encapsulation of interface and (API) implementation.

It should be std::string’s job to store strings. If people want to perform operations on them, that’s what free functions are for, right? Nobody wants std::string to have hundreds of methods.

Tbb is great.


I see as concepts as being much cleaner way of enforcing a contract, particularly since SFINAE can make the error quite distant from the problem.

> Personally I judge code quality by a) Functionality (does it work, is it safe?), b) Readability c) Conciseness d) Performance and e) Extendibility, in this order, and I don't see how these new features in reality help move any of these meaningfully in the right direction.

I don't understand... How do these features not address those points?

> a) Functionality (does it work, is it safe?)

constinit, consteval and all the remaining constexpr improvements are a massive step for ensuring the "compile-time-ness" of code:

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...

There's several more sharp edges being removed too. It's of course not going to tackle the fundamental safety concerns the way Rust is doing, but that would be a new language (like Rust is) anyway.

> b) Readability

requires is infinitely more readable than the SFINAE we had to write so far:

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...

Besides that elephant in the room, most of these changes involve making the code either simpler to read/write (too many to name) or more explicit (consteval/constinit, attributes, etc.).

> c) Conciseness

Half the features contribute to this in one way or another (e.g. see previous point, or the spaceship operator), but there's also a whole list of syntactic sugar being added:

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...

> d) Performance

What about coroutines?

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...

> e) Extendibility

Various fixes to customization points:

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...


Regarding "Readability", you mentioned requires (which is great), but ranges and views also make everything so nicer to read!

True, C++20 also comes with great library additions that are not the subject of this blog post but affect (possibly to an even greater degree) the code attributes in question.

It's definitely tricky. I think if you just stick to modern C++ and avoid anything advanced unless necessitated, it's a big improvement on your code. But as we know, developers with the discipline not to take advantage of every feature available to be "clever" is rare. And I agree, the standard library is still very much lacking. This is one thing I really like about working with C#, the vast majority of what I'm doing is available and simplified through the standard library.

>"They should have just stopped extending it after C++11."

For the uninformed such as myself, what happened beginning at C++14? What exactly was the fundamental shift?


It is not really about the language at all. He got older, and does not want to learn new things. Other people who stopped learning earlier say, "better C" instead.

The language has gotten continuously more powerful since 2011, albeit in smaller increments until C++20 when several big features landed.

Good C++11 looks practically nothing like C++98, and good C++20 looks as little like C++11.

It is really getting more fun all the time, as old crud falls away and you can more and more just say what you mean. Improved type-inference capabilities are doing a great deal of the heavy lifting.


> - Waiting for Cross-Platform standardized SIMD vector datatypes

> - Using nonstandard extensions, libraries or home-baked solutions to run computations in parallel on many cores or on different processors than the CPU

SIMD computation and multithreaded parallel computations were largely solved with execution policies. C++17 added multithreaded and multithreaded+SIMD execution policies, C++20 added single threaded SIMD execution policy.

I would argue that standardizing SIMD vector extension datatypes is an anti-feature for all cross platform programming languages. Writing AVX512 code is very different from writing NEON code. If the compiler autovectorizer doesn't generate good enough code for you, you have no choice but to use the non-cross platform vendor specific intrinsics anyway. If a SIMD datatype and the operations you could perform on it were standardized, it would necessarily have to be a very low common denominator. I don't even know what the lowest common denominator between MMX, SSE2, AVX2, AVX512, NEON and Altivec (to name a few) even is.

Note that the autovectorizers in GCC and Clang (not MSVC) are very, very good. If you structure your data in the way it would have to be structured if one were going to write hand-vectorized code anyway, GCC and Clang will, with a high probability, vectorize it correctly.

I don't know what a standardized language feature for execution on different processors than the CPU would even look like. What languages have this, and what does it even look like? Can you give a code sample?

> - Debugging cross-platform code using couts, cerrs and printfs

I don't think I understand what you're suggesting. On second thought I definitely don't understand what you're suggesting.

Are you suggesting that the C++ standards committee should standardize a _debugger_? You'd have to standardize the ABI first. There's no way to do that; 32 bit x86 with its 8 registers must necessarily have different calling conventions than ARM with its 32 (I think? it's been a while) registers.

If you're suggesting that the committee standardizes a UI, there's no way you're going to get the Visual Studio team and the GDB team to agree on what a debugger ought to look like. I don't even know where a mediator would even begin to start suggesting anything.

If you're suggesting that current debugger offerings such as the Visual Studio debugger and GDB aren't good enough, I dunno what to tell you. They work for me.

> - Forced to use boost for even quite elementary operations on std::strings.

Can you give an example? The big thing I used boost string stuff for was boost::format, but now that there's std::format I don't need that anymore.


C++ is a broken mess and I'm completely fine with that because it couldn't be any other way. It started as C with classes and they've kept it moving into the 21st century. Rust is here now and should be used for new projects, but at least old projects get to use these new features, ugly as they are. I've also noticed that most people complaining about "new" features do not understand them.

The #1 feature I currently want is the ability to do an implicit lambda capture of a structured binding, at least by reference. I appreciate there are interesting corner cases of like, bindings to bitfields: I simply don't need those corner cases solved... if it just supported a handful of the most obvious cases I would be so so so happy, and then they can spend the next decade arguing about how to solve it 99% (which I say as we know it won't be 100%... this is C++, where everything is some ridiculous epicycle over a previous failed feature :/).

(edit:) OMG, I found this feature in the list!! (It was in the set of structured bindings changes instead of with the changes to lambda expressions, which I had immediately glanced through.) I need to figure out now which version of clang I need in order to use it (later edit: I don't think clang has it yet; but maybe soon?)... this is seriously going to change my life.

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...


This thankfully made it into C++20.

However a full destructuring bind, à la Lisp, hasn't. You can't do `for (auto& [a, [b, c]] : some_container_of_structs)` which is handy for taking apart all sorts of things.

Relatedly, there's no "ignore", though it exists in function declaration syntax: you can write `void foo (char the_char, int, long a_long);`. But you can't ignore the parts of a destructure you don't need: `auto& [a, , c]`. This is only occasionally useful in the function declaration case, but it's quite handy when, say, a function returns multiple values and you only need one (consider error code plus explanation).

And variadic destructuring...well I could go on.

I haven't attended a C++ committee meeting in 25 years (and didn't do a lot when I did) so I have no reason to complain.


Destructuring that lets you ignore parts of the object is usually found in the form of pattern matching.

Lisp destructuring comes directly from macros: CL's destructuring lambda lists and macro lambda lists are closely related cousins.

Macros usually care about all their arguments. Reason being, they are designed to cater to those arguments; an unnecessary element in the syntax of a macro will just be left out from its design, rather than incorporated as a piece of structure that gets ignored. (The exceptions to it are reasonably rare that it's acceptable to just capture a variable here and there and ignore it.)


Yeah: 100% to these complaints; I do run into the full destructuring issue occasionally, but it isn't blocking my ability to do composition of features in the same way this lambda capture issue is ;P.

One day we will get it. I believe the intention is to support full destructuring but it is hard to get a feature added to the standard. Sometimes functionality is cut just to increase the probability that it will be voted in.

For example lambdas were added in C++11, but generic lambdas were cut out and only added in C++14.


What grumby refers to seems trivial to implement.

"If you don't see anything between the commas, then fill it in with a compiler-generated symbol."


There is a proposal for using _ or __ as a throwaway reusable identifier. Not sure what the status is, though.

>> this is seriously going to change my life.

Now I'm curious. Can you give a small code example of the kind of thing this solves and how it will change your life? ;-)


I constantly use both lambdas and structured bindings; without this feature, I am having to constantly redeclare every single not-a-variable I use in every lambda level and then maintain these lists every time I add (or remove, due to warnings I get) a usage. Here is one of my lambdas:

nest_.Hatch([&, &commit = commit, &issued = issued, &nonce = nonce, &v = v, &r = r, &s = s, &amount = amount, &ratio = ratio, &start = start, &range = range, &funder = funder, &recipient = recipient, &reveal = reveal, &winner = winner]() noexcept { return [=]() noexcept -> task<void> { try {

And like, at least there I am able to redeclare them in a "natural" way... I also tend to hide lambdas inside of macros to let me build new scope constructs, and if a structured binding happens to float across one of those boundaries I am just screwed and have to declare adapter references in the enclosing scope (which is the same number of name repetitions, but I can't reuse the original name and it uses more boilerplate).


Ah I see, yes that's horrible.

It's kind of weird that structured bindings were not captured with [=](){} before, actually. I'm still stuck at C++11 for most of my work so I cannot use structured bindings at all, but I would not have expected to have to write that kind of monstrosity in C++17.


Out of curiosity, what kind of domain is this?

I work on a "streaming" probabilistic nanopayments system that is used for (initially) multihop VPN-like service from randomly selected providers; it is called Orchid.

https://github.com/OrchidTechnologies/orchid


How did you fall into such a niche? I don't mean that as a pejorative. It just seems so specific. And esoteric to me.

It's a VPN Service that uses cryptocurrency as a means of payment.

What seems really esoteric to me is that the 'Orchid' Ethereum Token has a $737,057,000.00 fully diluted market cap, which I'm struggling to understand: https://etherscan.io/token/0x4575f41308ec1483f3d399aa9a2826d...


I dunno... Brian Fox (the developer of bash) got involved, and he tapped me (someone he has worked with before) as a combination networking and security expert? FWIW, if you describe anything with the technical precision I just did, almost anything will sound "esoteric" ;P.

I believe GCC has supported this for a while now, even before it was added to the list of features for C++20.

Yeah... I did know gcc allowed it, but I didn't know it was because the spec now allowed it rather than that they were just doing it anyway. Sadly, I am heavily heavily using coroutines--even coroutine lambdas... with captured structured bindings ;P (don't try to auto template them though: that crashes the compiler)--which clang has much better support for.

I hope one day we can get a widely adopted C and C++ package manager. The friction involved in acquiring and using dependencies with odd build systems, etc. is one of the things I dislike about the language. I’m aware on Linux things are a bit easier, but if it were as easy as “npm install skia”, etc. everywhere, I think many people would use the language more. Rust has package management, but not the ecosystem yet. On the other hand, C/C++ has the ecosystem, but no standard way to easily draw from it.

A widely adopted package manager requires a widely adopted build system. CMake is certainly a contender, but the ecosystem is too fragmented even then & you have to do a lot to try to link disparate build systems together. Also C++ is a transitive-dependency-hell nightmare & any attempt to solve that (like Rust has) would break every ABI out there. Given how bumpy such breakages have been in the past, I don't think there's any compiler maintainer eager for it (even MSVC has decided to largely ossify their STL runtime ABI).

Conan is certainly a laudable attempt at something like this. Without access to their metrics though, it's hard to tell if they're continuing to gain meaningful traction or if their growth curve has plateaued. It's certainly not in use in any project at medium to bigger size companies I've worked at. By comparison, Cocoapods was pretty successful in the iOS ecosystem precisely because Xcode was the de facto build/project system.


I'm a longtime CMake user, but I think even within the CMake world, the solution is quite a bit more complicated than just "everything needs to be CMake", with a lot of hassles that arise when multiple generations of the tooling are involved, when you're trying to pass down transitive dependencies, or when package X has a bunch of custom find modules with magic that tries to locate system versions of dependencies but silently falls back to vendored ones.

The higher up the stack you get, the worse and worse these problems get, with high-level packages like Tensorflow being completely intractable:

https://github.com/tensorflow/tensorflow/tree/master/tensorf...


Yup. 100% agree. I totally overlooked the shitshow you'll have managing the different versions of CMake a build might require. Somehow Bazel manages to escape that mess. I think that might be a better foundation, but getting everyone to port to that... it's a tall ask & there's many vocal people who are against improving the build system they work with (hell, I've met many engineers who grumble and strongly prefer Makefiles).

I'm obviously pretty biased having come from 10 years of doing infrastructure in the ROS world, but having spent a lot of time integrating this or that random library into a singular product-versioned build, I do quite like the approach of colcon:

https://github.com/colcon

Basically it has plugins to discover/build various package types (autotools, cmake, bazel, setuptools, cargo), and the "interface" between packages is just the output of whatever the standard install target is for a given package. This makes it totally transparent whether your dependency is built-from-source in your workspace, coming from /usr/local via a sudo-make-install workflow, or coming from /usr via a system package.

Under this model, you never pull a dependency as a "subproject" with invocations like include or add_subdirectory; it's always using a standard find_package invocation, where basically the only requirement on participating packages is that they cooperate with long-existing standards like CMAKE_PREFIX_PATH and CMAKE_INSTALL_PREFIX. Vendoring a library is then not making a copy of it in your project tree, but rather as sibling project within the shared workspace that colcon builds.


> Without access to their metrics though, it's hard to tell if they're continuing to gain meaningful traction or if their growth curve has plateaued.

Some public data that could be used as proxy for traction:

- Some companies using Conan in production can be seen in the committee for Conan 2.0 called the tribe: https://conan.io/tribe.html. That includes companies like Nasa, Bose, TomTom, Apple, Bosch, Continental, Ansys...

- The public repo for ConanCenter packages got approx. +3500 pull requests in the last year https://github.com/conan-io/conan-center-index/pulls. This doesn't count contributions to the tool itself.

- https://isocpp.org/files/papers/CppDevSurvey-2020-04-summary... shows a 15% of adoption

- With +1600 subscribers the #conan channel in the CppLang slack is consistently ranked in the most active channels every month: https://cpplang.slack.com/stats#channels


"I think many people would use the language more"

C++ is considered the industry leading language in many fields. I'm not sure how many more you would want (given that those fields that don't use C++ ARE probably better served with some other language).

I agree the build is painfull, but large orgs have for this reason specifically implemented build systems using nugets, conan/cmake or whatnot.

In personal projects I just download the prebuilt binaries of component libraries and drag and drop them to visual studio, minimizing hassle.

If you discard finesse and scalability as requirements you can actually jury rig a C++ project in a jiffy. You just need to let go of the idea that it must be "industry standard setup".


C++ used to be the industry leading language in many more fields, but it lost ground to other languages. Not a bad thing--"know thyself" and all that. But Rust seems like a credible threat to C++'s remaining niches (bury your head in the sand if you want), and C++ will need to evolve if it is to not lose further market-/mindshare. And it is evolving, as this article points out, but a huge glaring pain point in C++ development remains the build and package management tooling. The aforementioned build systems that large organizations operate aren't nearly as nice as, say, Cargo and I think a lot of greenfield projects who have to choose between cobbling together their own build tool to work with C++ and using Rust + Cargo off the shelf will choose the latter (other factors notwithstanding).

I will get worried when NVidia releases CUDA-Rust, and changes their GPGPUs from C++ memory model to Rust, Microsoft decides to rewrite WinUI in Rust, Apple moves Metal from C++ into Rust, or Unreal/Unity get rewritten in Rust.

Like I said:

> bury your head in the sand if you want

Great chat, as always. :)


Rust is so far from taking over c++, you could bury your head, my head, and everyone else's head in the sand and we'd still be waiting.

You write as if Rust vs. C++ was some sort of competition.

I don't understand this - they are not competing brands or sports teams but tools.

Why would it matter and to whom if C++ use would decline?

If use of C++ declines then I don't understand how that would make the language a lesser tool.

Choose the best tool for the job and all that.


> You write as if Rust vs. C++ was some sort of competition.

Competition exists all around us, all the time, whether we like it or not.

And the competition between C++ and Rust is very clear. I, for example, would likely be spending more time / effort on learning the latest C++ standards if Rust didn't exist. And likely hate my life a little bit, unless I could exclusively stick to C++ "the good parts" if such a subset exists and I didn't need to interface as much with existing C++ code.


My job is writing C++ and I love it. I've been working with C++ since the mid 90's and have grown very fond of the language and I'm pretty productive in it.

If C++ use declines, then there are fewer opportunities for me. So you can count me as a member of team C++.


I also write C++ for a living but feel no threat if I had to suddenly start writing C, C#, Java, Python, Rust, F#, Scala, or what have you. Sure, it would mean learning a thing or two, but basically they all are driving the same ARM or x86 based compute stack with exactly the same constraints due to computer architecture.

My focus has been to brand myself as "domain expert" in few algorithmic domains rather than "C++" expert so this may affect my point of view, though.


> feel no threat if I had to suddenly start writing C, C#, Java, Python, Rust, F#, Scala

It isn't necessarily a threat. I'm pretty comfortable in a bunch of different languages but just enjoy C++ more than the others. To use a car analogy, I have no problem driving automatic but I really like driving stick.


If you don't keep up with all this C++20 material, there will likewise be fewer opportunities for you. They will throw it at you in an interview, to check that you aren't some stubborn C++98 gunslinger.

I do my best! I really enjoy working in C++ and it's a pretty exciting time for me when a new standard rolls around and my compiler gets updated to support it.

How long will it take you to assimilate those C++20 features?

Years. Some things are obviously useful right away and other things take me a lot longer to grok. For example, rvalue references have been around for a long time now and I still have to slow down when I see && in code.

Learning new features is not hard; putting these in practice usually depends solely on their support by the compiler.

> If use of C++ declines then I don't understand how that would make the language a lesser tool.

The hypothesis is that C++ will decline because it becomes the lesser tool (where "lesser tool" means it excels only in increasingly small niches) if it doesn't adapt. That said, the C++ community seems to want to adapt and remain relevant, as indicated by its significant progress over the last decade.

As for why someone might care about the usage of a programming language: because "ease of finding developers" and "quality and breadth of ecosystem" are major factors in deciding on new projects. I.e., "the best tool for the job" is often the one with the broader ecosystem and more developers, all else equal. So these factors feed back on each other.


> Why would it matter and to whom if C++ use would decline?

If the use declines to zero, then all the effort someone put into developing C++ compilers and related tooling had been for naught, as are C++ development skills.


Given the prevalence of C++ it is very hard for me to imagine a situation where the use would decline to zero.

Knowing how to write a specific language such as C++ is immaterial compared to the ability to design and implement complex software systems, and those skills are quite portable between languages. The best employers tend to recognize this.


>I hope one day we can get a widely adopted C and C++ package manager. [...] , but if it were as easy as “npm install skia”, etc. everywhere,

It's not just the package manager (the command line tool) ... it's the canonical website source that the tool pulls from.

C++ probably won't have a package manager with the same breadth of newer language ecosystems like npm/Nodejs and crates.io/Rust because for 20+ years C++ was developed by fragmented independent communities before a canonical repo website funded by a corporation or non-profit was created. There is no C++ institution or entity with industry-wide influence that's analogous to Joyent (Nodejs & npm) or Mozilla (Crates.io & cargo)

I wrote 2 previous linked comments about this different timeline: https://news.ycombinator.com/item?id=24846012

Tldr, 2 opposite timelines happened:

- C++ for 20+ years of isolated and fragmented development groups creates legacy codebases --> then decades later try to create a package manager (vcpkg? Conan? cppget?) that tries to attract those disparate groups --> thus "herding cats" is an uphill challenge

- npm and crates.io exist at the beginning of language adoption allowing the ecosystem to grow around those package tools and view them as canonical


Go has a perfectly good package manager that works with sources hosted on GitHub and other sites -- there isn't any centralized place for people to publish sources, unlike the other package managers you mentioned.

Go's package manager also came years after the language became widely used, and it is now very widely adopted according to the most recent survey[0].

I think C++ could have a good, unified package management story. It would just require the major stakeholders to all care enough to make it happen, which seems to be the missing piece here.

[0]: https://blog.golang.org/survey2020-results#TOC_8.


Go has a small dedicated team that develops and designs the language. They take some input from the broader community but are still the one who decides how things evolve. They decided at some point that go modules was the way to go and everybody followed, because they are the authority who decides how Go evolves.

C++ does not have an equivalent; it's completely decentralized, which results in a messier situation. As a result you have an open market where different people try to build different tools and approaches for their own problems, then try to get others to use them (similar to what Go had before go modules, when we had a lot of package managers to choose from).

Instead of a top down decision it's a negotiation between the various actors. But the last thing we need is for the C++ standards committee to standardize a package manager. That would take forever to do, would result in a messy tool that tries to compromise with all the actors in some ways, make it very hard and slow to evolve over time and would likely result in a lot of pain, etc.


> C++ does not have an equivalent, it's completely decentralized which results in more messy situation.

That is the role of the ISO C++ committee, is it not? They are the major stakeholders. They would just have to care enough. They cared enough to release C++20, didn't they? It's not like they never get anything done, which seems to be the implication a lot of people make in this discussion.

> Instead of a top down decision it's a negotiation between the various actors.

My understanding is that the various committee members represent the disparate interests of the broader C++ community. I agree it would be very much like a negotiation.

That doesn't mean that it can't be done. This whole thread is discussing things that have been done by the C++ committee: C++20.

> But the last thing we need is for the C++ standards committee to standardize a package manager. That would take forever to do, would result in a messy tool that tries to compromise with all the actors in some ways, make it very hard and slow to evolve over time and would likely result in a lot of pain, etc.

You just summarized my feelings about C++ in general. I would much rather people use Rust or Go or any number of other languages instead of C++, depending on project needs. Such opinions are rarely taken well in threads like this, though, so...

I've been trying to be optimistic and point out that C++ could get package management. If the C++ committee process works well, then the package manager should also end up turning out well.

I'll leave the reader to decide how well they think the long term direction and guidance of the C++ standard has been going and apply that to their feelings of a hypothetical future package manager.


It's design by agreement vs design by decision. In the former you need people to agree. In a committee setting it means that not only do you have to make MSFT, Google, & Apple happy, it's also the various other people that happen to be part of that standard body (the group is large). You definitely pull from a larger group of experts, but it's mired in indecision hell & compromise. Often times a decision that solves 90% of problems is better than a decision that is perfect, but the way ISO is set up, decisions kind of have to be perfect.

That being said, the C++ standards body (at least under Herb?) has done a decent job modernizing their process to fight some of the gravitational issues they were having. They've formalized deprecation rules & tried to get over disagreements. The design by committee issues haven't gone away though - the mess with coroutines, modules, & concepts is a great example of that. The ISO process of language papers precludes even simple additions to the STL where you not only have to navigate standardese, but also manage the review process (that's why you have to find a champion on the standards body to help guide your review through the rigamarole).

My experience contributing to the Rust standard library by comparison was much easier - put up a drive-by diff adding a new (admittedly minor) API, some minor review comments, done & shipped. The whole process took 1-2 weeks, no standardese, no arguing with a large committee on the exact wording, etc.


> My experience contributing to the Rust standard library by comparison was much easier - put up a drive-by diff adding a new (admittedly minor) API, some minor review comments, done & shipped. The whole process took 1-2 weeks, no standardese, no arguing with a large committee on the exact wording, etc.

this is of course all great until two people working in different parts of the language do things slightly differently. Either works alone, but the whole of the language is inconsistent and hard to learn.

C++ has enough inconsistent parts already and so tries to be careful to make new things consistent with itself as best as possible. Even that has failed despite all the review of people looking for places to make things consistent. It is a hard problem to design a large language.


So, a key difference here is that in Rust, the standard library and the language designers are two different teams, with two different standards. The parent is talking about the standard library; there's a reasonably low barrier to entry to add something, but it is added unstably, and the bar to getting it to stabilize is higher. The language does not accept additions by "drive-by PR", the barrier to getting something to land, even in an unstable way, is much, much higher.

The whole language team has to sign off on these stabilizations and language additions, which is what keeps up that consistency you're talking about.


Yeah sorry. Should have been clear that I was talking about the standard library. I've got the chops to contribute standard library code - would never even think about trying to tackle implementing language changes. I don't have the time nor energy to deal with C++ standardese since the spec is an ancillary artifact describing the thing rather than the thing itself (the thing itself being the implementation & documentation).

Granted, this isn't necessarily everyone's experience in std, as the change I implemented is well-worn/adopted by any condition variable implementation. Something more controversial/exploratory may have been pushed off into a crate first. I'm still impressed that it only took ~2 weeks to get [1] reviewed & into unstable. I wasn't even involved in the stabilization work/cosmetic renaming that it took to close out [2], which was driven by a community ask & the std maintainers doing a pass to make things consistent. Rust's velocity is such that they can deliver changes a full year faster than C++ can (& likely faster if the community really asks for it). In my book it's largely owing to having 1 compiler & 1 standard library, the latter with a much more streamlined RFC process.

[1] https://github.com/rust-lang/rust/pull/67076 [2] https://github.com/rust-lang/rust/issues/47960


No sorry needed! I think you were clear that you were; it's just worth reiterating that in Rust these are two separate groups with similar but slightly different processes, while in C++ the standard contains both the language + standard library. (Obviously C++ has working groups... the point is that the two languages are similar, but different.)

Hazard of being raised Canadian :). Sorry.

Unlike C++, Rust has pretty strong conventions around formatting and naming, and these conventions are followed in almost all major libraries. Furthermore, most such small APIs tend to bake in nightly for a while before they are stabilized, and so get two rounds of review: once during the initial commit, and once during the push for stabilization.

Do the Go sources on GitHub use waf, make, CMake, bazel, or something entirely bespoke? Or is a common build system assumed?

Go just uses the Go build system, which is common to all Go projects, so... it assumes a common build system.

Some people do occasionally use Bazel or other build systems on top of the Go build system for complicated monorepos.


Right. C++ has a chicken-and-egg problem in that neither its build nor its packaging ecosystem has even de facto standards. GitHub URLs don't solve either problem.

I think everyone agrees that a common build system is a necessary step if any of this is going to work.

Thinking about how much or how little would be required beyond a common build system in order to get a working package management system is still a valid thing to do.


I expect it is necessary to keep build configuration portable with respect to packaging and environment configuration. In other words, I expect downloading from URLs in Makefiles and CMakeLists to be a local maximum.

Keeping a parallel set of instructions or metadata that includes specific URLs and such might work, though. As long as you can skip all that when a system package, filesystem path, or git submodule is more appropriate.


>Go's package manager also came years after the language became widely used, and it is now very widely adopted according to the most recent survey[0].

Are you talking about "pkg.go.dev" and the "go get" command? Isn't there some path dependence in the history of events that's not comparable to C++? Consider:

- Go language: created by Google Inc

- "go get" syntax for package download designed and created by Google Inc

- "pkg.go.dev" funded by Google Inc and highlighted on "golang.org" website that's also run by Google Inc.

There is no business entity or institution in the C++ world that's analogous to Google's influence for Go + golang.org + "go get" + pkg.go.dev.

>It would just require the major stakeholders to all _care_ enough to make it happen,

But it's easier to care if there was an influential C++ behemoth that captured everyone's mindshare to move the entire ecosystem forward. C++ has no such "industry leader" that dictates (or heavily influences) technical direction from the top down.


> Are you talking about "pkg.go.dev" and the "go get" command?

No. I'm not talking about either of those. Your whole comment is, unfortunately, irrelevant.

pkg.go.dev is not a package repo. It's just a place for documentation to be rendered. It renders documentation from third party hosted code, such as on GitHub or elsewhere.

"go get" predates Go Modules, which is the current package management system. The whole original design of "go get" was to simply download code from somewhere on the internet, and place it in the right spot of the $GOPATH. This has nothing to do with a proper versioned package manager like Go Modules.

AFAIK, "go get" was also never really designed for Google's internal use cases. They use a monorepo that was perfectly content with $GOPATH, and all their code was developed in the monorepo to begin with. There was nothing for them to "go get", except for the rare outside dependency that they were embedding into their monorepo, I would imagine. I've never worked for Google, these are just things I hear about.

Go Modules was also not designed for Google. It was designed for the community, based on findings from community developed package managers for Go. Google has no real use for it — again, they use a monorepo.

Nowadays, "go get" can be used with Go modules, but in practice, it feels like it almost never is. Maybe someone would use that command to upgrade an existing dependency, instead of editing the `go.mod` file to change the version there?

So, your comment just shows that you haven't researched this enough. Yes, Go Modules was still guided by Googlers, who were even more in control of the language direction back then than they are now. Yes, change always causes some drama. But, I'm not really here to explain the history of Go package management...

I'm just saying that C++ could have a nice, distributed package management system, it would just require the major stakeholders to all care and work together on it. The ISO C++ language committee is a finite number of people. They are the major stakeholders, as far as the language direction is concerned.

If they didn't have the power to enact major language changes, we wouldn't be here talking about C++20.

The stakeholders for Go were able to develop a package manager that is distributed (an idea compatible with how all C++ code is scattered across the web these days), and that achieved broad adoption, and this was some years after the language went into wide use.

It’s an extremely relevant analogue for C++ to study, if the committee members wanted a package manager badly enough.

> But it's easier to care if there was an influential C++ behemoth that captured everyone's mindshare to move the entire ecosystem forward. C++ has no such "industry leader" that dictates (or heavily influences) technical direction from the top down.

You edited this in while I was replying, but I agree entirely. Getting the committee to agree to a package management solution would be much more difficult than having a single behemoth guide the decision. Does that mean it is impossible and therefore no one could do it? Everyone here talks like it is impossible, but it doesn't really seem to be.


>Yes, Go Modules was still guided by Googlers, who were even more in control of the language direction back then than they are now. Yes, change always causes some drama. But, I'm not really here to explain the history of Go package management...

>I'm just saying that C++ could have a nice, distributed package management system, it would just require the major stakeholders to all care and work together on it. The ISO C++ language committee is a finite number of people. They are the major stakeholders, as far as the language direction is concerned.

The ISO C++ committee can't learn from the history of Go modules community acceptance because they don't have the same power as Google. You seem to misunderstand what the C++ committee _is_. Yes, they have representatives from Microsoft/Google/Apple/Intel but the org is designed to review proposals from submitted papers. They are more like an ongoing academic conference rather than a devops team that runs websites.

We seem to be discussing 2 different abstractions of making a "package manager". With your emphasis on Modules, you seem to be only focusing on the tool. To repeat my gp comment, I'm also focusing on the canonical package repository (or index, or discovery engine).

Consider the following sentence from https://proxy.golang.org:

>The Go team is providing the following services run by Google: a module mirror for accelerating Go module downloads, an index for discovering new modules, and a global go.sum database for authenticating module content.

>As of Go 1.13, the go command by default downloads and authenticates modules using the Go module mirror and Go checksum database.

You misunderstood my cite of "pkg.go.dev" run by Google Inc but this is the part of your survey that I was referring to:

>The package discovery site pkg.go.dev is new to the list this year and was a top resource for 32% of respondents. Respondents who use pkg.go.dev are more likely to agree they are able to quickly find Go packages / libraries they need: 91% for pkg.go.dev users vs. 82% for everyone else.

The ISO C++ committee is not set up to implement a new website to make the above Go-specific paragraphs be a similar reality for C++ with a search & replace "s/Go/C++/g". Think about _who_ funds and provides paid people to actually run the "proxy.golang.org". It's Google Inc. The C++ committee doesn't have an equivalent situation.

Yes, the C++ committee can receive a proposal for a new language or library feature such as std::unique, and after some back & forth commentary and debate they say "approved", and then it's up to each C++ compiler vendor to go and independently implement it on their own timeline. In contrast, if someone proposes "C++ should have a package manager", exactly _who_ will implement and maintain the canonical repo mirror? This is not independent lines of work that GCC, Clang, Microsoft, and Intel can do on their own.

Even if we hypothetically extend the website "isocpp.org" to actually start hosting the canonical C++ repos instead of just blog posts about syntax proposals, _who_ is paying for it? Again, there is no single entity like Joyent/Mozilla/GoogleInc that raises their hand and says, "We'll set it up". I suppose we could imagine that the major players like MS+Google+Apple all contribute to a shared fund to pay for the repo mirror -- and the salaries for devops to remove malicious uploads -- but notice that no other major language package manager (JavaScript/Rust/Go) had to do it that way. So we have the friction of coordinating multiple corporations.

Even if that website were set up, many existing C++ library authors (whose libraries existed for decades before a C++ package manager) wouldn't bother uploading their code to it. That's another friction. E.g. Conan is supposedly the current winner of C++ package manager mindshare, and ffmpeg is not on it.

I think the disagreement is rooted in how we compare the ISO C++ committee vs Google Inc. To me, releasing a C++20 language *specification* does not say anything about implementing a canonical *repo* so that a command line tool magically works the way people expect.

EDIT to reply: >You can have a package manager without having a discovery tool or a central repo.

This means your conversation is focusing on the tool which isn't the abstraction I'm emphasizing.

>Package discovery tools are not very relevant to the discussion.

It's relevant if the particular person wondering "why C++ doesn't have a package manager?!?" uses a mental model of how npm and cargo work. They don't have to know if it's github vs gitlab vs somewhere else. The tool just works without thinking about the location. That's what a canonical repo as a default convention for the client tool provides.


> This means your conversation is focusing on the tool which isn't the abstraction I'm emphasizing.

Your edit implies that I'm talking about a useless tool that can't do anything in the absence of Google, which simply isn't true.

The Go Modules tooling does not depend on any central resource to work. Google could shut down tomorrow, and nothing would change for existing projects. The go CLI tools would still be able to find, download, and verify the dependencies. I would still be able to add new dependencies, and the tooling would be able to fetch those.

What are you talking about, if not a functional package management system? Google's websites are nice, but they're not required for everything to Just Work.

Anyone in the C++ community could stand those websites up at any time after the package management tooling came into existence. They're not required for the functionality of the package manager.

> It's relevant if the particular person wondering "why C++ doesn't have a package manager?!?" uses a mental model of how npm and cargo work. They don't have to know if it's github vs gitlab vs somewhere else. The tool just works without thinking about the location. That's what a canonical repo as a default convention for the client tool provides.

Go's CLI tooling literally doesn't provide any way to search for packages at all. You may think it's a requirement, but it's really not! Go requires you to know where the dependency is located, because Go sure doesn't unless you tell it!

It feels like I'm really awful at explaining things.


>Go's CLI tooling literally doesn't provide any way to search for packages at all. You may think it's a requirement, but it's really not!

You're still misunderstanding the level of abstraction I'm emphasizing for what a "package manager" means to many people.

Let's dissect the following command from the Javascript ecosystem:

  npm install react 
Notice that the end user does not need to know whether React is hosted on GitHub or GitLab or Facebook's own servers. He doesn't even have to do a Google search. The npm command just "magically" gets the React library.

Exactly _how_ does npm do that? From _where_ does npm fetch? The _how_ & _where_ is what the majority of my comment is about. All your explanations of Go not working that way does not address that mental model at all.

So you have 2 concepts in a "package manager":

(1) npm, the client command line tool

(2) the canonical default repo that npm tool points to -- and it's a virtuous cycle of easy use and trust because almost everybody publishes to it. It has grabbed mindshare.

You keep saying Go doesn't need (2), but I'm saying that doesn't change the fact that many people mentally include (2) in what a comprehensive package manager _is_.

And is (2) really that unreasonable? Consider the Go documentation example of "adding a dependency" from https://blog.golang.org/using-go-modules#TOC_3 -- it has this example output:

  go: finding rsc.io/quote v1.5.2
  go: downloading rsc.io/quote v1.5.2
... exactly _where_ is it downloading the "rsc.io/quote" package if the user does not manually specify Github/Gitlab/PrivateEnterpriseRepo ?

> ... exactly _where_ is it getting the "rsc.io/quote" package if the user doesn't manually specify Github/Gitlab/PrivateEnterpriseRepo

It's getting it from https://rsc.io/quote

I don't understand what's confusing about this... it's literally specified in the name of the package.

Russ Cox is hosting his packages at rsc.io, which is a personal domain name he owns. If you visit it with a normal browser, he just kicks you over to pkg.go.dev, because he doesn't want to put in the effort to make a website for your human consumption. He's just hosting some packages there.

I really, really feel like you need to spend some time with Go Modules. You don't really seem to be getting the decentralized nature of it. But it works, and it works well!

In this case, Russ Cox has a meta tag there that tells the Go tool to download it from GitHub: <meta name="go-import" content="rsc.io/quote git https://github.com/rsc/quote">

But there's nothing stopping him from actually exposing a git repo at https://rsc.io/quote, instead of just exposing a redirect.

By telling people to use that package URL, he has the flexibility to change how and where he hosts the package in the future.


Sure, someone would complain. Just because some people would complain about the absence of one feature that you say is impractical to implement doesn't mean you should avoid implementing the rest of the thing. That's the classic "throwing the baby out with the bathwater" thing. The benefits of a standardized package manager seem worth a few people complaining. I'm sure someone somewhere would probably even complain that they would rather be writing JavaScript or another, non-C++ language, no matter how good the C++ package manager is.

Go has proven that a good package manager can work without that feature. You say that feature is something the C++ Committee could never tackle. My whole statement has been "fine, learn from Go!" Instead, you keep harping on this nice-to-have feature and saying it can't be done.

Package management is solvable in a way that suits C++. It seems inevitable that standardized package management will eventually happen for C++.

I understand your perspective now, but I just don't think I agree with it.


> In contrast, if someone proposes "C++ should have a package manager", exactly _who_ will implement and maintain the canonical repo mirror? This is not independent lines of work that GCC, Clang, Microsoft, and Intel can do on their own. Even if we hypothetically extend the website "isocpp.org" to actually start hosting the canonical C++ repos instead of just blog posts about syntax proposals, _who_ is paying for it? Again, there is no single entity like Joyent/Mozilla/GoogleInc that raises their hand and says, "We'll set it up".

Literally no one is required to do any of that. That is the answer. Plain and simple.

> I think the disagreement is rooted in how we compare ISO C++ committee vs Google Inc. To me, releasing a C++20 language specification* does not say anything about implementing a canonical repo* so that a command line tool magically works the way people expect.

But I'm saying that's not how Go works at all. The dependencies are hosted on GitHub, GitLab, or wherever else.

There is no central package repo. There is no "canonical repo".

Package discovery tools are not very relevant to the discussion. You can have a package manager without having a discovery tool or a central repo. Searching GitHub to find a C++ package, then adding that repo as a dependency of your current project seems like it would be entirely reasonable, if C++ had a standard package manager that worked. Some community members might build a website to help you find popular packages... but that discovery tool doesn't interact directly with the packages at all.

proxy.golang.org is a proxy. No one publishes packages to it, and you don't have to use that proxy. You can use no proxy at all, which was the default once upon a time, or your company can host a proxy, or you can potentially find some random third party proxy online. The proxy isn't where packages are hosted -- it's just a means of accelerating downloads, if GitHub were slow, for example.

C++ code is hosted in a myriad of locations. The Go approach is to specify the GitHub repository that you're depending on, and that repo will be cloned by the package manager in your terminal. The `go.sum` file contains checksums to verify that the dependency you downloaded is untampered with since the last time you fetched it, and those hashes can also be used by any proxy that happens to be involved.

Go's package management system is truly distributed. It isn't centralized at all. Yet it still supports SemVer, downloading the correct, exact version of a dependency, checking the integrity of dependencies, recursively collecting dependencies of your dependencies, etc. All the features you would expect out of a package manager.
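(To make that concrete, a dependency in this model is just a module path plus a version in go.mod, with the integrity hash pinned in go.sum. Contents here are illustrative:)

```
module example.com/myapp

go 1.16

require github.com/some/dep v1.2.3
```

go.sum then records lines of the form `github.com/some/dep v1.2.3 h1:<base64 hash>` so that any later fetch -- direct or via a proxy -- can be verified against the same content.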

Unlike with Cargo in Rust, someone can delete one of these repos from GitHub and cause a real mess. `go mod vendor` is an option for anyone who prefers to vendor their dependencies.

Google has certainly provided some nice web tooling around the Go Modules system, but none of it is integral to the type of package manager that I'm proposing would suit the C++ dependency model. Go's package manager is very distinct from what you were discussing with Cargo, NPM, and others. It's much more attuned to the problems that C++ faces, and it walked a similar path to what C++ will inevitably have to do.


Which honestly mostly works well, except when it doesn't. There are at least a few module/version combos where people have shifted a version tag on their repo over time, leading to the (unfortunate) reality that you end up with different modules depending on whether you fetch through a proxy or directly. Not entirely surprisingly, this can (and will) cause build failures.

Source: I build lots of Go modules for fun (well, specifically, I have some automation to do it for me) and notice these things when I get more failures than expected. http://github.com/vatine/gochecker for anyone wanting to play along from home (the 1.16 release report is on hiatus, as there were a LOT of things that didn't work smoothly this time).


ABI makes this hard for C++.

Build from source

Then give me a package manager that can handle cross-compilation well.

Currently, it seems almost nobody is taking that into account when packaging their wares for consumption by CMake, or distribution by Conan. Or if they do give it some thought, it always ends up making dubious assumptions, like "Clang == Linux", or "MSVC == Windows".


I have one at work that does okay. If your project is based on cmake I can create the package very quickly, though most projects still don't create a cmake configuration file. If your project isn't cmake - well, at best a day, and often I will spend weeks fighting your half-baked build system that doesn't understand the common things I want to do. (In practice autotools projects have the option to cross-compile, but it doesn't actually work...)

CMake + vcpkg [0] does cross compilation, I've used it myself. It's pretty good!

[0] https://stackoverflow.com/questions/58777810/how-to-integrat...
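(For reference, the usual shape of that setup is to pick a vcpkg triplet for the target and hand CMake the vcpkg toolchain file -- paths here are illustrative:)

```
# install the dependency built for the target triplet
vcpkg install zlib:arm64-linux

# configure, pointing CMake at vcpkg's toolchain file and the same triplet
cmake -B build -S . \
  -DCMAKE_TOOLCHAIN_FILE=/path/to/vcpkg/scripts/buildsystems/vcpkg.cmake \
  -DVCPKG_TARGET_TRIPLET=arm64-linux
```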


Rust/Go seem to manage this just fine, without issues.

I actually extremely dislike language-specific package managers. I'm on Linux, the packages should be in my package manager. I don't want to maintain multiple package managers. npm is actually the worst here.

> I actually extremely dislike language-specific package managers. I'm on Linux, the packages should be in my package manager. I don't want to maintain multiple package managers. npm is actually the worst here.

As a user of software that doesn't care how it's built, sure. But system package managers are not a solution for general development with C++, or any other language.

If I want to use C or C++ to create software, how do I use libraries that aren't available in a system package manager? What if I need a version of a library that's not available in my system package manager? There are answers here but they aren't good answers (build from source, using whichever of N build tools the project happens to use, or hope there are prebuilt libs hosted somewhere)

Relying on system package managers to contain dependent libraries makes cross-platform development a complete PITA (more than it already is). Now you need the specific versions of all your libraries in package managers on all platforms, which is a complete non-solution for real development.


The problem is more or less solved - see Nix.

It'll take some decades for the ideas to percolate, but language-specific package managers are definitely not the future.


> The problem is more or less solved - see Nix.

Sorry, but I'm really not sure what you mean by "solved".

Nix is yet another (language agnostic) package manager with certain tradeoffs. But, if there is not an available Nix package for a specific version of a library I need to use - I'm out of luck.

Nix is not a build tool designed to work with arbitrary or latest development versions of libraries, for example. And, it will never solve that problem even _if_ it is technically capable of doing so - because there is no force in the world that would get all projects in all languages to use it.


> Nix is not a build tool designed to work with arbitrary or latest development versions of libraries, for example.

No, that's exactly what it's designed for and exactly how we use it. And it's not a build tool (it just calls your existing build tools under the hood), it's a system for keeping different versions of dependencies installed at the same time without errors.

> because there is no force in the world that would get all projects in all languages to use it.

You can write your own wrappers for the projects that are missing. It's a simple and idiomatic process.


> Nix is not a build tool designed to work with arbitrary or latest development versions of libraries, for example

Well it sort of is and it sort of isn't. The great thing about Nix is how the provided packages remain malleable. You can usually quite easily make a small override to a provided package to make it build from a specific revision of the source you desire, or add a custom patch, and Nix will just build it all for you then & there. Then you can go and rebuild bits of the rest of the distribution that depend on that using your custom version. If you so want.
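(A sketch of such an override, via an overlay -- all names here are illustrative; `overrideAttrs` and `fetchFromGitHub` are the standard nixpkgs hooks for this:)

```
self: super: {
  somePackage = super.somePackage.overrideAttrs (old: {
    # build from a specific upstream revision instead of the pinned release
    src = super.fetchFromGitHub {
      owner = "someowner";
      repo = "somepackage";
      rev = "abc123";                 # the commit you want
      sha256 = super.lib.fakeSha256;  # placeholder; Nix reports the real hash on the first build attempt
    };
  });
}
```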


Exactly.

Also, I'm terrified of this idea of "library manager downloads code from the internet and runs it on this machine", without all the tests and QA of individual dependencies like we have in Linux packages.

Also, I've seen so many times people adding dependencies to projects because they did not know the standard library already had what they needed. I get it, it is easier to "pip install foo" than to look for "foo" in the docs. I don't think any sane person can learn everything that is available in the standard library, but searching the docs is always insightful.


The problem is that system-specific package managers are an obstacle to making portable programs.

Even within Linux and BSD there are many flavours of package managers with slightly different naming schemes for their packages.

This fragmentation makes it impossible to have dependencies that just work. You need to either make users install things manually or every author has to probe multiple package names/locations using multiple tools.

Language-specific managers support all of the OSes and just work, especially for users of macOS and Windows (telling people their OS sucks may state a true fact, but doesn't solve portability problems)


The Linux model of package management doesn't work for newer languages. In particular it is heavily reliant on dynamic linking, which tends not to work when you have (a) an unstable ABI (b) generics (c) a culture of static linking.

It works fine, you just ship the static libraries. With static linking your binaries won't have dependencies anyway.

That's not to say the static linking craze is a good thing. We'd be far better off finding a way to dynamically link templates, so you get the security benefits of automatically updated dependencies that dynamic linking gives you.


Code library managers don't belong inside OS package managers (because you want hermetic builds), unless maybe you have some Nix-like multi-manager that can provide many environments.

Great point. I think BSDs got this right with their ports. (Incidentally, NetBSD's pkgsrc supports Linux.)

> Rust has package management, but not the ecosystem yet. On the other hand, C/C++ has the ecosystem, but no standard way to easily draw from it.

I take your point, and I share your desire for a canonical, Cargo-like package manager and build tool for C++ (it's one of the reasons I pivoted out of C++ development); however, I don't think C/C++ "has the ecosystem" these days. It certainly has an ecosystem--C/C++ dominates its own niches, but there's a big world outside those niches and there aren't good packages for much of it. Meanwhile, Rust is growing like a weed both inside and outside of the C/C++ niches, and the package manager largely enables that rapid growth. Also, Rust has a good interop story for C/C++, allowing it to leverage the existing C/C++ ecosystem. Anyway, I hope this doesn't read as contrarianism--I just thought it was an interesting distinction.


Most of the places where C++ doesn't have good libraries I wouldn't want to use Rust anyway, that is the domain of managed languages, GUIs, distributed computing, Web development.

And for the stuff I use C++ for, COM/UWP, Android NDK, GPGPU shaders, Unreal/Unity, Rust tooling is still WIP or requires leaving the comfort of the existing C++ frameworks and IDE integrations.


What’s wrong with using Rust for “GUIs, distributed computing, web development”?

In the case of a GUI, I’d expect a modern Rust GUI toolkit binding to look like any other GUI toolkit binding: an FFI-like abstraction that parses its own declarative view format, and exposes handles from those parsed views for native controller methods to bind to. Y’know, QML, NIBs, XAML, those things. This kind of GUI toolkit doesn’t exactly have high requirements of the language it’s bound to. (And I don’t believe many people want the other, procedurally-driven kind of GUI toolkit in the year 2021.)

Re: distributed computing — I can see the argument for Rust being the antithesis of easy network “rolling upgrade” (e.g. via being able to recognize known subsets of unknown messages, ala Erlang); but pretty much all languages that support distribution are very nearly as bad in that respect. (Only the languages that have distribution that nobody else actually uses — e.g. Ruby, Python, etc. — are on Erlang’s side of the spectrum in this regard.) But in terms of pre-planned typed-message version migrations, Rust can do this more idiomatically and smoothly than many other languages, e.g. Go, Haskell, etc.

Re: web development — there’s actually a lot of activity in building web frontend SPAs using Rust compiled to WASM. Started with games, but has expanded from there. Not sure about web backends, but the argument is similar to distribution: you need to do it differently in a static compiled language, but of static compiled languages, Rust is really a pretty good option.


The productivity hit produced by having to deal with the borrow checker and the design constraints it imposes on application architecture.

I won't take a GUI framework without a graphical designer, or without a component ecosystem from companies selling GUI widgets, in the 21st century.

Distributed computing, again: when thinking about distributed calls a la Akka, Orleans, SQL distributed transactions, I'd rather have the productivity of a GC.

Web development with Rust is nowhere close to the stack provided by JEE, Spring, ASP.NET, Adobe Experience Manager, Sitecore, LifeRay, Umbraco, enterprise RDMS connectors, ...

Rust best place is for kernels, drivers and absolute no GC deployment scenarios.


> I won't take a GUI framework without a graphical designer, or a component ecosystem from companies selling GUI widgets in 21st century.

Well, yeah, what I’m saying with “these types of modern frameworks don’t impose very many constraints on the language” is that there’s no reason that Qt, UWP, Interface Builder, etc. can’t support Rust (or most other languages, really), because in the end the tooling is just generating/editing data in a declarative markup language, that the language’s toolkit binding parses. You don’t have to modify the tooling in order to get it working with a new language; you just need a new toolkit binding. Just like you don’t need to modify an HTML editor to get it to support a web browser written in a new language. Qt et al, like HTML, is renderer-implementation-language agnostic.

> Distributed computing, again when thinking about distributed calls a la Akka, Orleans, SQL distributed transactions, I rather have the productivity of a GC.

I think I agree re: the productivity multiplier of special-purpose distributed-computing frameworks. I don’t think I agree that it’s a GC that enables these frameworks to be productive. IMHO, it’s the framework itself that is productive, and the language being GCed is incidental.

But, either way—whether it’s easy or hard—you could still have one of these frameworks in Rust. Akka wasn’t exactly easy to impose on top of the JVM, but they did it anyway, and introduced a lot of non-JVM-y stuff in the process. (I’d expect that a distributed-computing framework for Rust would impose Objective-C-like auto-release-pools for GC.)

> Web development with Rust is nowhere close[...]

Web development with Rust isn’t near there yet, but unlike distributed computing, I don’t see anything about web development that fundamentally is made harder by borrow-checking / made easier by garbage-collection; rather the opposite. I fully expect Rust to eventually have a vibrant web-server-backend component ecosystem equivalent to Java’s.

> Rust's best place is for kernels, drivers and absolute no-GC deployment scenarios.

Those are good use-cases, but IMHO, the best place for Rust is embedding “hot kernels” of native code within managed runtimes. I.e. any library that’s native for speed, but embedded in an ecosystem package in a language like Ruby/Python/Erlang/etc., where it gets loaded through FFI and wrapped in an HLL-native interface. Such native libraries can and should be written in Rust instead: you want the speed of a native [compiled, WPO-able] language; but you also want/need safety, to protect your HLL runtime from your library’s code that you’re forcing to run inside it; and you also want an extremely “thin” (i.e. C-compatible) FFI, such that you’re not paying too much in FFI overhead for calls from your managed code into the native code. Rust gives you all three. (I see this being an increasingly popular choice lately. Most new native Elixir NIF libraries that I know of are written using https://github.com/rusterlium/rustler.)


I would use Rust for distributed computing and GUIs, and I wouldn't be surprised if it begins to break into the graphics/gamedev world in the next 5 years. Agreed that Rust is still immature in those areas today, but it seems to be on a pretty aggressive trajectory and it's only a matter of time before Rust begins chipping away in those domains.

I did some real-time embedded development (including distributed embedded) in a past life in C and C++, and I really expect Rust to break through in that domain in a big way even though it's incredibly conservative (C++ is still the new kid on the block). It will take some time and it's never going to "kill" C or C++ in that domain (especially considering all the hardware that exists that LLVM doesn't yet target), but I think Rust will carve out a swathe of the embedded space for itself.


Sure, if you like to do stuff by hand. I'd rather use visual design tooling (think Qt Designer, Microsoft Blend), and in distributed network calls I have bigger fish to fry than who is owning what, hence Akka, Orleans or Erlang.

> I rather use visual design tooling (think Qt Designer,

Oof, I did professional Qt development and Qt designer was basically a joke. Not sure if it improved, but I've never experienced a visual design tool that saved me time. Not that they can't exist, just that the implementation is usually too buggy to justify itself. I don't enjoy debugging XML that gets compiled to C++ (I think it's compiled, anyway--maybe it's parsed at runtime... I forget). In whatever case, if you build a visual design tool for C++, you can build one for Rust as well.

> bigger fish to fry in distributed network calls than who is owning what

Agreed that I don't think distributed is the sweet spot for Rust, but there are certain niches (high performance, low level, etc) where Rust would be valuable. Previously I worked in automotive which is basically a bunch of distributed embedded computers talking to each other over a CAN network, and Rust would have saved a lot of time and money. On the other end of the spectrum, you have high frequency trading where performance is so important that C++'s myriad problems are worthwhile, so certainly Rust could add value here as well.


> GUIs, distributed computing, Web development.

People are trying out Rust in all of these spaces. I don't see why Rust is fundamentally unsuitable to these domains.


Imagine something like Swift UI for Rust, including the live preview.

Now imagine how to implement such designer in a way that supports component libraries, without having the burden of using Rc<RefCell<>> everywhere, while allowing the user to rearrange the component widgets in any random ordering.


> C/C++

No such thing.


I’m not going to be baited into silly semantic debates. Good day, sir.

I stated a fact. C and C++ are different (and incompatible) languages.

Still not taking the bait. :)

Have you tried vcpkg?

https://github.com/Microsoft/vcpkg

It's a tool to manage C++ dependencies (using CMake), created and maintained by Microsoft. A lot of open source projects are supported (you can see part of the list here: https://github.com/microsoft/vcpkg/tree/master/ports).


I did, about a year ago. The usability was questionable.

Their main workflow appears to be that all developers use the tool, everyone building packages from source and using their own binaries. For large dependencies that's a large waste of time if more than one person is working on the software. It's possible to export built libraries as NuGet packages, but these are tricky to consume.

Another thing: these ports (where they apply patches to third-party open-source code to squeeze it into vcpkg) are fragile. I remember cases when packages didn't build, either at all or only under certain conditions (half of what I tried was broken when I only wanted release configurations).


I believe that's a recent development but vcpkg has decent binary caching now: https://github.com/microsoft/vcpkg/blob/master/docs/users/bi....

Edit: they also have an experimental support for registries since February of this year: https://github.com/microsoft/vcpkg/blob/master/docs/specific...


Without a standard ABI, having c++ binary packages is a huge pain, requiring multiple artifacts for every permutation of compiler, os, and platform. It's less painful today than in the past, simply due to fewer compilers, OSes, and platforms, but it is still a problem.

A common ABI doesn't save anything, as we still need to build for ARM and x86 (MIPS and RISC-V are also out there and may be important to you). Those processors all have different generations; it might be worth having a build for each variant of your CPUs. Once you take care of that, different ABIs are just a trivial extension. RPM and .deb have been able to handle this for years.

All developers on that team used 1 compiler (VC++ 2017 at that time), one OS (Windows 10), one target platform (AMD64). Compiler/linker settings are shared across developers.

I wanted vcpkg to export just the required artifacts (headers, DLLs, static libraries, and debug symbols), so only 1 person on the team (me) is wasting time building these third-party libraries. The team is remote, upload/download size needs to be reasonable.


I have been using it since at least 2018 to create NuGet packages of dependencies. You can also do zip and others.

> For large dependencies that’s a large waste of time if more than 1 person is working on the software.

but all other hype languages do this and you don't hear people complaining


I surely do, that is one reason that I don't play with Rust as much as I would like.

I am not buying new hardware for faster Rust builds, when it is perfectly fine for my C++ workloads.

The main difference is that for C++ I never compile third party libraries, always get them as binary.


Maybe a difference is that C++ can be very, very slow to build. And C++20 will likely result in even longer build times now that you have concepts and can have exceptions and allocations in a constexpr context.

What's a "hype language"?

Rust, Go, Julia, Swift ?

What makes them hype languages? Oh, so like new languages?

> What makes them hype languages?

being high in the chart here: https://insights.stackoverflow.com/survey/2020#technology-mo...


Ah, so hype == love.

hyped pretty much means loved new thing, no?

I rarely hear it except in a pejorative sense.

Rust's package management is actually a downside to my adoption. I have a lot of C++, a home-grown package manager, and a large cmake based build system. Rust wants to replace all this, but that means shelling out to Rust's build system, which is a bit of a messy situation and means I need to learn a new build system with the language. (not hard, but another thing to learn). Our home grown package manager means we have our own local copy of everything - I have a hard requirement to be able to rebuild software for the next 15 years (there is a closet someplace with a windows XP service pack 1 PC so we can build some old software - God only knows if it will boot anymore). In the embedded space we need to support our old code for years, and you can't always upgrade to the latest.

"cargo vendor" enables you to embed your entire dependency tree into your repo instantly, and compilation will work from that. Over half of your comment seems to have been predicated on the assumption that this either wasn't possible or wasn't easy... so I think your perceptions of Rust's package management system are more of an impediment to you than the actual package management system.
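For reference, the vendoring workflow is roughly: run `cargo vendor` in the project, commit the generated `vendor/` directory, and add the configuration snippet the command prints to `.cargo/config.toml`, which looks like this:

```toml
# Printed by `cargo vendor`: tells Cargo to resolve every crates.io
# dependency from the local ./vendor directory instead of the network.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

After that, builds are fully offline and reproducible from the repo alone.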

While I will admit ignorance about the details of Rust, what you described doesn't solve my problem. We do not believe in mono-repo here, and have broken our system down into lots of small repos with custom tools to manage that. Checking in a copy of the dependency tree into each repo is not the right answer. I'm sure I can make this all work, but everything I've heard about Cargo is it will fight the way we have set up our system. We are not changing; while there are things I'd do differently (use Conan - but that didn't exist until just after we rolled our system out, and is just enough different that it would be hard to switch), the system works for our needs.

Cargo will allow you to solve this problem half a dozen ways. I’m certain at least one of those ways would fit the patterns you’re describing.

But you keep coming back to this idea of how Rust has to fit your workflow perfectly and you’re unwilling to make any changes to have things work better with Rust...

If you’re unwilling to change anything at all, then it’s laughable to imply that you would use an entirely different programming language for anything, even if it fits your workflow exactly.

So, I just don’t see the purpose of this discussion. You’re basically saying that you’re not going to use Rust, no matter what Rust does or does not do. That’s neat?



Oooo... does Rust have a good way to do this with submodules instead of copies?

It sounds like you're implying git submodules are actually a good thing... I think you're the first person who has implied such a thing to me before. Everyone I actually know agrees that submodules are basically never the right solution or a pleasant solution.

But, to your question, no. Where would the submodules even point? The dependency source code artifacts are stored "immutably" (except for takedown notices or extreme abuse cases) on https://crates.io. They aren't git repos, and there's nowhere for git to point.


> Everyone I actually know agrees that submodules are basically never the right solution or a pleasant solution.

I generally prefer submodules to other solutions as I tend to fix / change a lot of stuff in the libraries that I'm using


Yeah: using submodules makes maintaining vendor patches (which, FWIW, I pretty much don't do and will move mountains to avoid... but like, I totally appreciate why people do them) really natural and easy. Like, you don't just want a copy of the code: you want to be able to participate in the development of the upstream code with the same level of tooling that they have, and submodules does that.

The approach here would be to declare the dependency on the git repo directly. Vendoring is still going to copy the stuff you're building into your project, but you'd keep those patches in the repository of the dependency, not on your vendored copy.

The key thing here is being able to do it through multiple levels of dependency, for which I see someone else provided me an answer that is actionable! \o/

People definitely have strong opinions on submodules, but it is nowhere near so one-sided: a ton of people hate them, and a ton of people swear by them. FWIW, all of the Rust libraries I use are available as git repositories. With many other package managers, I can tell them "don't use the upstream copy from the package repo: use the copy I have in this folder" in a trivial manner. I thereby don't really want "automation" around either downloading the code for me to mirror or for the submodules I want: I want to set it up and then configure it so it is all "honored"... and I could totally see the feature you're talking about somehow only working one way (with automatic copies) instead of being flexible.

Rust allows you to override dependencies via the patch directive in your Cargo.toml: https://doc.rust-lang.org/edition-guide/rust-2018/cargo-and-...
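A hedged sketch of that `[patch]` directive (the crate name and path here are made up for illustration):

```toml
# Hypothetical Cargo.toml: the [patch] section swaps the crates.io copy
# of a dependency (here `serde`, as an example) for a local checkout,
# e.g. a git submodule, throughout the whole dependency tree.
[dependencies]
serde = "1.0"

[patch.crates-io]
serde = { path = "vendor/serde" }
```

The override applies even when `serde` is pulled in transitively by other dependencies, which addresses the "multiple levels of dependency" concern above.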

Rust also lets you do that trivially, by saying "hey here is where the folder is".

Yes, it can pick dependencies from checked out submodules, or git URLs directly. It has ways to patch individual dependencies anywhere in the dependency tree, and multiple ways to mirror or replace the whole crates.io index. It's pretty flexible in this regard.

Yuck. This growing practice of bundling the world is a travesty.

I’m not here to argue one way or another on dependency vendoring. The person I replied to was making an inordinately big deal about how they keep code around forever and it compiles decades later, as if Rust dependencies were some ephemeral thing that would break your code by next Monday!

If they want to reproduce their workflow using Rust, cargo allows vendoring and many other solutions.


I think the barrier to entry in the problem domain for C++ is much higher than something like nodejs. Installing dependencies is the least of one’s worries there.

Also, how many dependencies are we talking about? Node apps have a million dependencies for, I think, stupid simple stuff that should just be reinvented in a given codebase. In a C++ app too many dependencies invites incompatible stylistic choices which I think will turn into a Frankenstein codebase.

In Go this isn’t a problem because of “go fmt” plus a simple language at its core.


No matter the implemented package manager solution, it has to deal with different package types: from a single-class library (a single hpp file) up to monster libraries like ffmpeg.

In the case of ffmpeg, what should the package manager do? Download the sources and all its dependencies and build from scratch? This is very difficult and time consuming.

Because right now the alternative is going to the ffmpeg website, downloading it, and including the DLL (and .lib) or .so and a couple of .h files in your project. And that's pretty simple to me.


It's not that the package manager fixes the problem, it's that having 1 or maybe 2 or 3 canonical or popular package managers gets the implementer to fix the problem.

The implementer, who has extensive knowledge of their own build system runs that aspect and creates a package that conforms to a universally expected output.

It's an incredible difference going from C++, where you end up in the details of all kinds of repos and build systems, to something like C# with Nuget packages where it's a simple command or single click to start using someone else's code.


Consider that C/C++, being highly portable, has support for many platforms and architectures, including the possibility of cross-compiling.

I guess if a package manager works on all those architectures and platforms, then the implementer would have to support all of them, and that's not always the main objective.

Let alone if there are several package managers.


> guess if a package manager works on all those architectures and platforms, then the implementer would have to support all of them,

Other "highly portable" languages handle this by simply having the developer include a manifest of the platforms their library works for. The package manager only shows compatible packages for the targeted platform.


ffmpeg isn't a library, just so you know. libav* are the libraries that ffmpeg uses, which are what you'd include. https://trac.ffmpeg.org/wiki/Using%20libav*

FFmpeg is the whole package including ffmpeg as a standalone program and its libraries.

From https://www.ffmpeg.org/about.html

"FFmpeg is the leading multimedia framework, able to decode, encode, transcode, mux, demux, stream, filter and play pretty much anything that humans and machines have created."

After that:

"It contains libavcodec, libavutil, libavformat, libavfilter, libavdevice, libswscale and libswresample which can be used by applications. As well as ffmpeg, ffplay and ffprobe which can be used by end users for transcoding and playing"


Just like the sibling comment, I would vouch for vcpkg, although for my use cases NuGet will also do.

Seems weird to have a language fill in the deficiencies of OSes wrt package management.

Conan is probably the flagship C++ package manager and supports multiple build systems including cmake. Nuget/vcpkg is also usable but does not come with build system integration.

Vcpkg has great CMake integration. Further, the model of Conan of distributing a bunch of binaries honestly seems like the wrong approach for C++ where you have to juggle all different sorts of compilers, triples, and ABIs. We use a completely custom toolchain which pretty much rules out Conan.

The one annoying thing about vcpkg, though, is that all packages are described in the vcpkg source tree. There are no “repositories”. Customizing or adding custom packages requires using the somewhat annoying to use overlay system.

I’d prefer some sort of hybrid between the two, with packages distributed as source code but pulled from a repository. I believe this is how Rust’s Cargo works.


Just to add a small detail: vcpkg also has Visual Studio integration, not only CMake.

And regarding repositories, since February 2021 vcpkg has an experimental support for them. You can read the spec here: https://github.com/microsoft/vcpkg/blob/master/docs/specific....


I've been using conan pretty easily. My biggest issue is that the recommended install method is via Python's pip

We'd also need the one operating system on the one architecture. Perhaps the central planning committee can make that a goal for their next five year plan?

Python, Ruby, Node, Go, and Rust all work on multiple OSes and arches.

Python and Ruby only work on the Python and Ruby interpreters, respectively, and those require an OS-specific way to install them. They work on multiple OSes and architectures in the same sense that JavaScript or HTML does.

Go and Rust work only on a very very limited set of OSes and architectures. That's fine if you're targeting one of those, but it turns out the vast majority of computers in the world are not vanilla rice-pudding desktop systems or vanilla rice-pudding desktop systems adapted for the server room. The argument that some other tool solves a limited set of problems with your tool in a limited and limiting way is a poor one if you're trying to promote a universal solution.


Java runs on many platforms (and billions of devices as Sun used to love to point out), yet packaging and dependency management are pretty much solved problems.

Where there's a will, there's a way. In the C/C++ community there's no will. It's time they admit that to themselves and everyone else.


Java runs only on the Java interpreter. C and C++ run on the bare metal of the CPU.

Is Java modeled around a central repository of all dependencies so users can download random binaries off the internet?


Yes, and has been so for at least 17 years: https://search.maven.org/

But it's not necessarily centralized, Java package management tools can use many third party repositories, if needed, and can also use proxy/cache/mirror systems where for example a company can point all their package manager just to their official company repo and everything goes through it.

BTW, Java's not interpreted, it's compiled. Just not to native code.


Some years ago I would have thought all this would be really cool. But who are they kidding? What sort of person will be able to keep this whole language in their head?

C++ books were thick bricks already 20 years ago, and students struggled hard to learn it. Now the language is like 3x as complex. Students are going to need a separate bag just for their C++ material.

Sure you can write in a subset of C++ that is easy to get. But when did that ever work? Who has worked in a company and seen people able to stick to a minimal C++ subset?

No, people get tempted and they start using all the new stuff. Short term it is a real gain. But once you hire junior developer who has to read this code, they suddenly have 3x as many concepts to learn and understand.

I predict a serious recruitment problem with C++ down the road. Old timers today will start using all the new features. When management start trying to add new team members they start realizing that it is really hard to get quality C++ developers.

Anyone who tries Go, Rust, Swift, Nim, D or some other modern/semi-modern language is going to ask themselves why on Earth they would want to torture themselves with C++.


It is easy to know why the world's highest-paid programmers, coding for the world's most demanding applications, use C++ and nothing but C++: nothing else is even trying to be useful in those applications.

C++ has sharp edges and pitfalls to stay clear of, so users ... do stay clear of them.

A usable, better language would gain users. But nothing is even on the horizon.

Rust is closest, but its designers have consciously chosen not to support the most powerful of C++ features, to try to keep the language more approachable. Yet, Rust complexity is already beginning to rival C++. Some of that complexity is in how to work around the language's deliberate limitations. As Rust matures it will suffer from unfortunate early choices in precisely the way C++ has, and will only get more complex.

Every choice in the C++ design has been to provide better ability to capture semantics in libraries, so that independent libraries integrate cleanly with each other and the core language. People can use libraries with confidence that they are giving up no performance vs. open-coding the same feature.

Access to the most powerful libraries depends on language features no other language implements. Thus, the best libraries will only ever be callable from C++ programs. With (literally!) billions of lines of code in production use, abandoning interoperability is not a choice to take lightly.

When you start a big project, you never know what it may come to need. If your language "won't go there", your program won't, either, and you will be stuck with unpleasant choices. This is the concept of a language's "dynamic range", a more meaningful measure than "high" or "low" alone: how high can it reach, how low can it reach, how far can it reach, at once? C++ is king of dynamic range. Nothing else comes close, or is really even trying.


> It is easy to know why the world's highest-paid programmers, coding for the world's most demanding applications, use C++ and nothing but C++: nothing else is even trying to be useful in those applications.

There's no proof of this. The world's highest-paid programmers tend to work for FAANGs and a few other categories of businesses, and they might or might not work in C++, and they tend to move up the ranks by being able to scale humans (other devs), not raw tech.

It's a myth that being an über-geek is well paying, by the way.


I will be sure to pass that fact along to all the well-paid über-geeks I know (who will be quite surprised at their misapprehension).

But there is no necessary relationship between "the world's highest-paid", and your notion of "well paying". You could be simply wrong, or your measure of "well paying" could exceed what the actual "highest-paid programmers" cited get.

Dan Luu did a good essay about programmer compensation a few years back.


Could you provide some examples of C++ features that the Rust team has consciously chosen to not support, to try to keep the language more approachable?

Could you show some examples of how you need to work around these deliberate limitations?


Operator overloading. Standard library user-provided allocators. Move constructors. Inheritance. Certain kinds of specialization. SFINAE. Somebody who knows Rust better, and C++, will be able to supply a longer list.

There is a corresponding list of features C++ doesn't have yet, and others it is precluded from having. That programmed move constructors can fail sucks. That moved-from objects still exist sucks.

Providing examples here would be more work than I am prepared for just now. (I am not happy to say so.)


* Operator overloading: already in Rust

* Standard library user-provided allocators: in nightly, on their way to stable

* Move constructors: not in for technical reasons and performance reasons, not for approachability

* Inheritance: not in for technical reasons combined with a lack of demonstrated need rather than just desire, not for approachability

* Certain kinds of specialization: you're hedging with "certain", but specialization is in nightly, and used in the standard library.

* SFINAE: Rust doesn't use templates, so this as a direct feature doesn't make sense. I'm not aware of any proposal to include something similar in Rust, the team has never said that this wouldn't be in for approachability

> Somebody who knows Rust better, and C++, will be able to supply a longer list.

I don't think your thesis is accurate, so I don't think so. And if this is so obvious, as you claim, then you should be able to provide examples!


There are good alternatives to C++: Rust and D. There are a number of languages with not quite as high but decent performance and varying expressive power: Java, OCaml, Go, even Fortran for numerical stuff (not a joke; modern Fortran is quite advanced, and most likely runs faster than C++).

I see rather few reasons to start a new project in C++ in 2021, even though in some niches nothing else is viable, sadly.


This one-page format using "concept" -> "example" -> "reasoning" is fantastic for people like me who used C++ a lot in the past, and haven't touched it* in decades but still want to keep up to date.

It probably helps that the author understands this enough to ELI5. So thanks, Oleksandrkvl, whoever you are.

* And by "touched it" I mean used its deeper features, not just STL containers and simple classes (and for/auto). (I still use it for TFLiteMicro, but generally I see that most users are topical C++ programmers, like me.)


> This one-page format using "concept" -> "example" -> "reasoning" is fantastic

I agree, but I don't think that's happening here. It's documenting C++ "Concept" which is the technical name for a certain part of the C++ language.

It's a great article though.


I was really looking forward to concepts.

But the actual implementation seems like a syntactic and (partially) semantic mess to me.

Obscure syntax (`requires requires`), soooo many different ways to specify things, mangling together with `auto`, mixing of function signature and type property requirements (`&& sizeof(T) == 4`), etc etc.

This reeks of design by committee without a coherent vision, and blows way past the complexity budget I would have expected to be spent.

Rust (traits), Haskell (type classes) and even the Nim/D metaprogramming capabilities seem simple and elegant in comparison.


The original C++0x concept proposal had proper type signatures and was based, I think, on more traditional type theory. But it had to be continually tweaked as it did not work well in practice so it grew in complexity a lot. Additionally the only implementation was extremely slow to compile.

It was taken out of the standard, and the new version (aka concept-lite) is actually much simpler, although expression based. We lost the ability to type check template definitions though.

Far from being a design by committee, I think for the most part is the brainchild of a single author. The 'auto' thing is definitely a committee addition as many vetoed "implicit" templates and requiring auto after the concept name in the shorthand form was the compromise that pleased no one [1].

[1]: this is an obvious manifestation of Stroustrup's Rule


I haven't been following C++ for quite a while but when I did, I wanted modules. And now it looks like they're here and they've done it wrong. Or at least missed an opportunity to do it really right.

They've done the equivalent of * imports in languages like Java and Python. And style guides in those languages universally recommend against doing that.

Why? With named imports, if you see a symbol anywhere in the codebase, its declaration is somewhere within the file itself. If you see a call to foo(), it's going to be either a local function or a declared import. With C++ modules (as with C++ includes) it could come from any of the imports, so you have to look outside of the file to figure out where it came from.

Sure, IDEs help paper this over somewhat. But it just seems sloppy for a post-1980s language feature to throw all imports into the global namespace.


That's because modules and namespaces are the same thing in languages like Python, whereas they are separated in C++. The code in the imported module will go into whatever namespace it is in within that module, not the global namespace.

I had forgotten about C++ namespaces. It's been quite a while.

I'm not sure that this addresses my concern though. Do namespaces enable the import of specific symbols from a module?


I found the cppcon video on c++ 20 features to be very informative and I am honestly excited to use ranges and other items mentioned, unlike the other guys on here who hate progress.

https://youtu.be/FRkJCvHWdwQ


“There are only two kinds of languages: the ones people complain about and the ones nobody uses.”

As somebody who uses C++ daily and spends a lot of time compiling, I am really excited about modules. Unfortunately, even the latest (unreleased) g++ v11 and clang v8 say they support them only partially. Does anybody have any experience trying them out? Do they work, and are they ready for production use?

I tried to evaluate whether they would help build times at all; this is the only reason I want them. Based on some research, they don't if you have parallel builds (since a module must be built before its users, not in parallel). So I think this lessened excitement from users/compiler devs/build-tool makers, and it was enough to convince me to just buy more cores to parallelize builds more, rather than put in work for half-baked modules.

There are already precompiled headers. They are a big speedup as far as I know, but semantically problematic. Maybe if you use them, you will not see a speedup.

Haven't been keeping up. Is there a module solution yet? #include is cursed.

They have Modules in the table of contents and when you read that section it talks about the downsides of #include. Is that what you mean?

https://oleksandrkvl.github.io/2021/04/02/cpp-20-overview.ht...


Glory be! I knew it had been on the table since like 0x but wasn't sure if it was ever going to make it in.

Why is it cursed? Package managers are the wrong way to go for C++, IMHO. Just look at the bugs and bloat that every package manager is suffering right now. Rust isn't far behind. Keep C++ away from this.

EDIT: I just learned module systems are NOT package managers.


1) Headers cause programmers to take dependencies without realizing. (Especially when unity builds are setup).

2) They are fragile because they involve putting file paths into the code.

3) They cause build information to be in the code rather than with the other build information. To find out what is really being consumed I have to search every code file.

4) They cause all kinds of issues for beginners such as multiply defined errors.

5) They cause the compiler to revisit code hundreds, even thousands, of times, bloating build times. (This is such an extensive problem that a small industry has sprung up to address it, i.e. precompiled headers, unity builds, fastbuild, etc.)

6) They introduce confusing bugs. (Someone modifies a header in a dependent library but not the dependency; I literally had to fix this for a 10-year+ game programmer at a studio you would know. It turns out adding virtual functions in a header will cause an off-by-one vtable lookup, and hilarity ensues.)

> Package managers are the wrong way to go for C++

I didn't say anything about a package manager. Modules don't require package managers. In .NET you can use NuGet or not, but the compiler understands how to take source and an assembly and hook them up.

I just want a sane way to tell the compiler to build one thing and then use that when it builds the next thing. Rather than this weird concept that every TU has to stand completely on its own.


A module system and a package manager are two separate things.

> Just look at the bugs and bloat that every package manager is suffering right now.

And your assumption that the lack of a package manager reduces bugs and bloat is based on what scientific proof? :-)


Getting rid of the pre-processor would be a radical evolutionary step for C++. You would have a devil of a time interfacing w older code, esp. C code. At that point, just start using a different language altogether.

> Getting rid of the pre-processor would be a radical evolutionary step for C++.

I said nothing of the kind! Obviously, that is not tenable at this time. But it's entirely possible to add a module system by which new code can take dependencies without the cumbersome #include mechanism.
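For the curious, a minimal sketch of what that looks like with C++20 module syntax (the file names and the `square` function are illustrative, and compiler/build-system support is still partial):

```cpp
// math.cppm -- a module interface unit; compiled once, consumed many times
export module math;

// Only exported declarations are visible to importers
export int square(int x) { return x * x; }
```

```cpp
// main.cpp -- a consumer: no textual inclusion, no include guards,
// no leaking of the module's non-exported internals
import math;

int main() { return square(4) == 16 ? 0 : 1; }
```

Unlike #include, the importer sees only the exported interface, and the interface unit doesn't get reparsed in every translation unit that uses it.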


True. But adding a module system is a step in the direction of eventually rendering the preprocessor obsolete. Maybe C++30 :)

I actually think modules-ts is worse than include...

That's possible. I can't say all module system implementations are better than #include but I've used some good ones that definitely are.

If C++ was an octopus made by stapling extra legs to a dog, it's now just a giant pile of legs, and nobody's heard a bark in decades.

Alternatively, C++ is an extremely mature language that has evolved through a painstakingly well considered process involving some of the brightest minds in computer science across multiple decades.

It continues to deliver on the promise of providing the structure you want, without any undue runtime cost.


My criticism stems more from C++'s steadfast refusal to drop backwards compatibility, in any way, for anyone, ever -- while also adding new features. What this means is that new features can't provide the guarantees they can in other environments leading to "ruined fresco" [1] syndrome.

Concrete example: std::move. Move constructors can copy, and `std::move` doesn't move. Naturally, it just casts your T to `std::remove_reference_t<T>&&`. Because why not. It also leaves your source object in an undefined but totally accessible state -- whose validity is up to the implementor's convention! I think std:: collections are left in a valid but unspecified state after being moved from (correct me if I'm wrong), but your own types may just explode or fail silently. Talk about a giant footgun.

This approach leads to poor imitations of features from other languages getting stacked on top of the rickety footbridge that is K&R C.

It's specifically the evolutionary design philosophy that I take issue with.

The language has become borderline impossible to reason about. Quickly, what's the difference between a glvalue, prvalue, xvalue, lvalue, and an rvalue?

And the compiler, in the name of backwards compatibility, sets out not to help you because adding a new warning might be a breaking change. I've got easily 15 years of experience with C++ - granted, not daily or anything. To figure out what's actually happening, you need to understand 30 years of caveats and edge cases.

[1] https://www.npr.org/sections/thetwo-way/2012/09/20/161466361...


> My criticism stems more from C++'s steadfast refusal to drop backwards compatibility, in any way, for anyone, ever -- while also adding new features.

Languages that break backwards compatibility tend to have very slow uptake of the new versions. Python 3.0 was released in 2008 and took at least a decade to become the main version. And the changes made to Python were minor compared to what would need to be done with C++.

> The language has become borderline impossible to reason about.

This I agree with, but mostly it doesn't affect casual users of the language. I drop into C++ every 5 years or so and I don't find it difficult to understand or be productive. I have no idea what the difference between a glvalue, prvalue, xvalue, lvalue, and rvalue is, but it's mostly not a concern for me.


As someone who creates production code in assembly, C, C#, and Java (among others), but who doesn't have that much experience with C++:

C++ certainly seems like a fragmented language from the outside. Lots of features added over the years to address problems with safety and provide additional "zero overhead" abstractions. The style and idioms of code written in this language seems to have changed pretty significantly over its lifespan. So breaking backwards compatibility to throw out old standards and force programmers to utilize new ones seems to make sense. However, it raises a few questions.

1) Who decides which parts of the language to throw out and which to keep? How do they decide this? Would the goal be to keep the multi-paradigm concepts, or re-focus the language? Which of the "zero overhead" abstractions should be kept?

2) Has this already been tried before in essence? There are certainly a number of languages out there that seem to strive to be "a better C/C++". What benefit is there to attempting to create a C++ 2.0 instead of using one of them?

3) Do the benefits of breaking backwards compatibility really outweigh the loss of all of the accumulated libraries and all the software of the past 30+ years? Even with ideal management of the new language, would it be enough to bring people to a new version?

4) Do you continue adding to this new version as you did with the previous one... surely that would eventually lead to the same fragmentation seen in the current version.

5) What happens to C++ 1.0 in this case? Do you continue to support and expand it? For how long? I suppose one could look at what happened with Python, but I'm not so sure it's that comparable.


If it wasn't backwards compatible then it would be a new language. Compilers keep adding new warnings all the time. If someone's build is broken by -Werror then they should disable that warning, if it isn't relevant to them.

Certainly the C committee considers standardizing warnings to be a breaking change, so that's not always true. [1]

Re: backwards compatibility, that's not really true. ABI compatibility is different from source-level compatibility. If a library or module is built to one language standard, then so long as the ABI remains compatible, I think it's fair game to change syntax and semantics when compiling with a newer language release, especially when there are clear and obvious deficiencies in the existing ones. Obviously, the committee and I disagree on this.

However, my point remains that if you value backwards compatibility above all else, and it's that backwards compatibility that actually prevents you from adding features in a complete and honest way, maybe don't add the feature. Like, if `std::move` is the best you can muster, don't add it! It's not a move! I don't know what it is, but it's definitely not what the label on the tin says.

[1] https://thephd.github.io/your-c-compiler-and-standard-librar...


Backwards compatibility is the reason why C++ became what it is today, and why it prevailed over other (similar/better?) languages designed at the time. Herb Sutter himself discusses this in the talk here: https://herbsutter.com/2020/07/30/c-on-sea-video-posted-brid...

There are plenty of languages that have made a clean break from C or older C++, if you want them. Most of them have already been forgotten.

This is the ultimate proof that the C++ approach has worked.

Sure. Strongly agree.

A few languages have succeeded, for example C# and Java are arguably C++ successors although they do not cover all the same domains.


> but your own types may just explode or fail silently.

I mean, that's the point of them being "your own types". If you couldn't do anything you want, including putting `assert(1 == 2)` in any method of your own type in C++, then people would be quick to design a Cwhatever language where you can, because it's a useful subspace of the design space of programming languages.


what a snarky and unproductive statement

It is all of that, and kind of funny too.

agreed :)

That's funny. It's not a great description of reality, but you can't make a comment both accurate and funny at the same time.
