I've never actually done it, but my understanding is that if you want to change something in C++ you have to
- Make the changes and have an example implementation.
- Write a paper explaining the change and why it's necessary, using the committee's paper template and documentation format.
- Post it to a listserv for comments, hope that people like your idea after all the time you’ve already invested.
- Attend a meeting, possibly in person (!), maybe remote, to make sure that the committee addresses it.
- If all goes well, your change makes it into C++26, which means it won't percolate into C++-for-the-average-person for 5 to 10 years.
C++20 is a very nice language, but I really wonder whether it can survive competing with the ecosystem Rust has. Between the niceness of Cargo for package management, the edition mechanism that allows breaking changes, and the openness to contributions, Rust just has a velocity that lets it fix problems with the language on a timescale C++ can't match.
It's a fun exercise to think about how you would do C++ fresh, starting from ANSI C, given what you know now from the actual experiences of C++, Java, D, Rust, etc. C++ has had a tendency to attempt to solve the current problem while tightly coupling itself to it, even when that problem may not exist later. I submit STL allocators as an example: the allocator is a template parameter, so it is baked into the container's type and into every interface that accepts one.
What's the best way of enforcing taste? Slowing down changes and putting barriers in the way? That's horrible, sure, but is there something better? A benevolent dictator, maybe? Bjarne's taste has improved considerably (and I'm sure I would have made a bigger hash of it than he did starting out, too).
Isn't C++ an example of a language that moves slowly and still has lots of bolted on features? Having used it a bit, I would use Rust as an example of a language whose features are very thoughtfully considered and not "bolted on" at all.
If we're just looking at C++ and Rust and drawing conclusions, I would expect the conclusion would be "moving super slowly correlates with more bolted on features, not fewer".
I think it's more likely that Rust just benefited from lots of hindsight with respect to C++'s mistakes.
Note also that since C++11, the language has been evolving quite quickly and the features released since have been some of the most thoughtful.
By 1985, C++ had a published book describing the language (which I still own somewhere; it's terrible) and a commercial implementation, as well as Stroustrup's own code.
Meanwhile, in June 2011 Rust wasn't even "pre-alpha" and was just learning how to bootstrap.
For example Rust's GPU support is not great right now - and it may very well feel bolted on a few years down the line.
Rust has its dangerous, unsafe warts. As someone learning Rust, finding out that string slices can cause panics was utterly shocking to me. String slices should have used character indices, not byte indices. This was a feature that was NOT thought out carefully. It goes completely against Rust's philosophy as a "safe", catch-errors-at-compile-time language.
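To make the complaint concrete, here's a minimal sketch (the example string is mine) of how byte-based slicing can panic at runtime, alongside the checked `str::get` alternative that returns `None` instead:

```rust
fn main() {
    // Silence the default panic message so the deliberate panic below is quiet.
    std::panic::set_hook(Box::new(|_| {}));

    let s = "héllo"; // 'é' is two bytes in UTF-8, so the char boundaries are 0, 1, 3, 4, 5, 6
    assert!(!s.is_char_boundary(2)); // byte index 2 falls inside 'é'

    // &s[0..2] compiles fine but panics at runtime:
    // "byte index 2 is not a char boundary"
    let sliced = std::panic::catch_unwind(|| &s[0..2]);
    assert!(sliced.is_err());

    // The checked alternative returns None instead of panicking:
    assert_eq!(s.get(0..2), None);
    assert_eq!(s.get(0..3), Some("hé"));
}
```

(For actual character-based access, `s.chars().nth(i)` or grapheme-aware crates like `unicode-segmentation` are the usual answers.)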
Now, setting aside the big watershed of C++11, the language has since slowed down and is doing better for it. Correlated? Maybe. Good taste should also slow the rate of change of "what do I have to deal with as a programmer in this language?" And if the original design that people found good enough to use was done really well, it shouldn't need too much change on top of it.
Point taken, you can use C++ as an example for anything. Even opposing points. It really has it all! (Like it or not)
Not many languages take C's ultra-high-inertia approach. Perhaps Scheme and Forth.
Most people using it will probably also claim it is a feature rather than a handicap.
Reminder that Ada already had modules (packages) decades ago.
I don't know Ada as well as I know C and C++, but I believe Ada lands somewhere between C and C++ in terms of how fast-moving it is. Makes sense that Ada is pretty slow moving, given its purpose.
> too much feature churn
Pick one, please.
Getting more eyeballs on a language change, and getting the language's users involved in its development, produces a stronger result than simply spending more time on each change.
From what I can tell, the vast majority of changes to Rust are small things like "add a method to Vec". These aren't really the kinds of things that turn into cruft or radically change the language, but we also don't have to wait years for them.
The speed of change before it stabilizes has a strong inverse correlation with the number of features that don't fit well, and after it stabilizes I don't see much of a correlation overall.
Then again, you'll just point to how awesome it is that Rust doesn't change and stays stable and easy to use for years.
It's all just rationalizations for the emotional investments you've already made.
C++'s standards process and pace haven't stopped the stdlib from being filled with bad implementations of primitives that people have to replace with libraries like Boost or custom per-vendor implementations. Why do you think so many companies (game studios, Facebook, etc.) ship their own versions of the C++ STL, or prohibit the use of the STL entirely? Why do so many libraries ship with a C++ wrapper over a C public interface instead of the opposite? Why do binary-only libraries written in C++ so often ship with separate builds for various compilers instead of a single binary like a C library? The main answer to all of these questions is that C++ is deficient by design in all sorts of ways: underspecified ABI, low-quality stdlib specs, bad encapsulation, bad performance characteristics, etc.
C++ is just losing here.
One compelling example could be the direction Microsoft has started to take with .NET Core/5/6+:
- Submit issue/PR on the dotnet/aspnetcore repo on GH (anyone can do this).
- Conversation starts almost immediately.
- Suggested changes could be merged in a matter of weeks.
- Expect your changes to become available in an official release in under a year. Pre-release builds much sooner than that.
- Expect an LTS release the following year.
I have personally experienced this workflow a few times. It is incredible to have this type of feedback loop with something that has historically been a total black box of proprietary decisions.
There are still a ton of places where you don't have much leverage, but the fact that there is any movement possible from the community at all is incredible to me (again, from a historical perspective around Microsoft & .NET).
I don't have to learn how to draft a PhD thesis in order to add value to the ecosystem. Ideas should not be filtered so harshly in my opinion.
This seems really obvious, but it's a pattern that repeats whenever engineers engage in what is only partially an engineering effort. Similar issues occur on the python-ideas list and in other places. When you follow this approach, you remove the risk of prematurely investing in all the steps listed above.
Also, you don't have to attend the meeting in person; however, a paper whose author (or somebody else who can argue for it) isn't present will have a hard time getting attention, considering the number of papers in the queue.
Both strategies have advantages and disadvantages.
For GCC it is safe to link together any combination of objects A, B, and C. If they are all built with the same compiler version, they are ABI compatible; the standard version (i.e. the -std option) doesn't make any difference.
The difference is that editions are a first-class, supported mechanism for managing interoperability between code targeting different "standards versions". It's not up to the user and the linker to conspire so that things work out, it's part of the language, and you don't need to limit your public interfaces / generic code to the lowest version.
(I don't know if this is the best policy? It seems like it could be a big burden on implementations in the long run.)
Cute for toy projects, but a big disadvantage for any other kind of programming. There's a reason established languages like C++ or C# take compatibility very seriously.
You may say I'm a D-reamer, but I'm not the only one...
This is very respectable and reasonable for Rust and many similar minded open source projects.
Of course there's another perspective on this as well. As a business owner, for example, I'd prefer languages with more predictable promises of stability and sustainability, like C++, Java, C#, or Go.
Again, this is all fine and in my opinion suits Rust very well. The reckless and short-sighted evangelism of some Rust promoters is the one thing that doesn't fit well into this picture.
Personally the line for me is when it comes to copyright assignment. I'm only okay with it if I'm also paid.
Is it really labor if someone is doing it as a hobby? It's not free if they are getting enjoyment out of it.
It's not up to you to decide what constitutes a hobby, though.
The person doing it can decide for themselves whether they see it as a hobby.
If an activity would typically be considered a work for compensation by society's norms, it's not reasonable to call it a hobby.
A person can write software as a hobby (personal use, or maybe use personally/within a small circle of friends) and, at the same time, write software for use by others in commercial endeavors (work). The fact that they may not do the latter in exchange for money doesn't make it a hobby.
So woodworking in my garage can't be called a hobby because most woodworkers get paid for it?
> This is exactly opposite of how it works in a typical company, where the goals come from above
Companies have the same problems: people join in sales, marketing, finance, and product management, all bringing in their own ideas. Often this is not managed appropriately, and ideas are just put into a backlog instead of "saying no to ideas when they would not fit with other ideas, or [..] lots of discussion to align ideas to make them fit".
The two camps seem to have determined that the two languages are aimed at different types of developers and that there is a place for both.