Modern C++ Won't Save Us (alexgaynor.net)
326 points by neptvn 7 months ago | 388 comments



A significant issue I have with C++ is that even if your code base is pure C++17, the standard library is a Frankenstein's monster of legacy and modern C++ mixed together that required many compromises to be made. A standard library that usefully showed off the full capabilities of C++17 in a clean way would have to jettison a fair amount of backward compatibility in modern C++ environments.

I've noticed that more and more people like me have and use large alternative-history "standard libraries" that add functionality, reimagine the design, and in some cases reimplement core components as a modern C++ cleanroom. I've noticed that use of the standard library in code bases is shrinking as a result. You can do a lot more with the language if you have a standard library that isn't shackled by its very long history.


Because C++ is my primary language, and I always work on my codebases alone, I dropped the standard library and implemented my own replacement. It's not at all practical for most, I'm sure, but it allows me to evolve the library with new revisions of the C++ standard without being absolutely fixed on backward compatibility.

One of the things I did for safety is that all access methods of all of my containers will bounds check and throw on null pointer dereferences ... in debug and stable mode. And all of that will be turned off in the optimized release mode, for applications where performance is absolutely critical. The consistency is very important.

Whenever I get a crash in release mode, I can rebuild in debug mode and quickly find the issue. And for code that must be secure, I leave it in stable mode and pay the small performance penalty.
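A minimal sketch of the pattern described (the real library isn't public; `CheckedVector` and the `STABLE_MODE` flag are invented names, standing in for the debug/stable/release modes above):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Checks stay on in debug and hypothetical "stable" builds, and compile
// out entirely in the optimized release build (NDEBUG, no STABLE_MODE).
#if !defined(NDEBUG) || defined(STABLE_MODE)
#  define BOUNDS_CHECKS_ENABLED 1
#else
#  define BOUNDS_CHECKS_ENABLED 0
#endif

template <class T>
class CheckedVector {
public:
    void push_back(const T& v) { data_.push_back(v); }
    std::size_t size() const { return data_.size(); }

    T& operator[](std::size_t i) {
#if BOUNDS_CHECKS_ENABLED
        // Active in debug/stable mode; a crash here pinpoints the bug.
        if (i >= data_.size())
            throw std::out_of_range("CheckedVector: index out of range");
#endif
        return data_[i];
    }

private:
    std::vector<T> data_;
};
```

The point of the consistency is that the same source builds in all three modes; only the checking policy changes.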


> A significant issue I have with C++ is that even if your code base is pure C++17, the standard library is a Frankenstein's monster of legacy and modern C++ mixed together that required many compromises to be made. A standard library that usefully showed off the full capabilities of C++17 in a clean way would have to jettison a fair amount of backward compatibility in modern C++ environments.

Not to mention C++ does not really provide the facilities necessary for convenient, memory-safe and fast APIs[0].

And as demonstrated by e.g. std::optional the standard will simply offer an API which is convenient, fast and unsafe (namely that you can just deref' an std::optional and it's UB if the optional is empty).

[0] I guess using lambdas a hell of a lot more would be an option but that doesn't seem like the committee's style so far.


> (namely that you can just deref' an std::optional and it's UB if the optional is empty).

If that were not the case, `optional` would get exactly zero usage. The point of those features is that you build in debug mode or with whatever your standard library's debug macro is to fuzz your code, but then don't inflict branches on every dereference for the release mode.


> The point of those features is that you build in debug mode or with whatever your standard library's debug macro is to fuzz your code, but then don't inflict branches on every dereference for the release mode.

That's completely insane. If there's always a value in your optional, it has no reason to be an optional; if there may not be a value in your optional, you must check for it.


Sure, but that's not the issue. You should be using a std::optional like e.g.

   if (my_optional) 
         do_stuff(*my_optional);
Here's one (explicit) conditional.

However, if the dereferencing, *my_optional, were to be safe, it too would need to perform a conditional check behind the scenes. But it doesn't, as C++ places that in the programmer's hands so as not to sacrifice speed.


This is solved in Rust by letting you test and unwrap at the same time:

    if let Some(obj) = my_optional {
        do_stuff(obj);
    }
> However, if the dereferencing, *my_optional, were to be safe, it too would need to perform a conditional check behind the scenes. But it doesn't, as C++ places that in the programmer's hands so as not to sacrifice speed.

So basically that turns C++ optional types into fancy linter hints which won't actually improve the safety of the code much.

I understand C++'s philosophy of "you pay for what you use" but that's ridiculous, if you use an optional type it means that you expect that type to be nullable. Having to pay for a check is "paying for what you use". If you don't like it then don't make the object nullable in the first place and save yourself the test. That's just optimizing the wrong part of the problem.


You can also do it in a one-liner in C++ if you're using shared pointers:

    if(auto obj = my_weak_ptr.lock())
    {
        do_stuff(obj);
    }
> Having to pay for a check is "paying for what you use". If you don't like it then don't make the object nullable in the first place and save yourself the test.

The point is that I can choose _when_ to pay that cost (e.g. I can eat the nullability check at this point, but not at this point, and I can use more sophisticated tooling like a static analyser to reason that the null check is done correctly).

Is it more error prone? Yes. Does it allow for things to go horribly wrong? Yes. Is "rewriting it in Rust" a solution? No. If I want to pay the cost of ref-counting, I can use shared/weak ptrs.


The rust code in question is not using reference counting.


Rust's borrow checker is like compile-time reference counting. Same benefit, but no run-time cost.


> So basically that turns C++ optional types into fancy linter hints which won't actually improve the safety of the code much.

C++'s optionals are less "safer pointers" and more "stack-allocated pointers" (not to be confused with pointers to stack allocations).


C++ gives you all the options as usual.

  do_stuff(my_optional.value())
Is also safe, it throws if the value is absent, the safety check is performed behind the scenes.

But people might not want to throw an exception, so

  if (my_optional) 
         do_stuff(*my_optional);
Must also be allowed. The consequence is someone can also just do

  do_stuff(*my_optional)
No safety check is done and you get undefined behavior if the value is absent.

I don't know rust so I suspect it has a language construct which c++ lacks that prevents you from doing

  let Some(obj) = my_optional 
  do_stuff(obj);


Yes, you have to use if let, not let. That code would be a compiler error. (Specifically, a “non-exhaustive pattern” error.)


Hence going back to the original issue I pointed:

> C++ does not really provide the facilities necessary for convenient, memory-safe and fast APIs.

> You should be using a std::optional like e.g. […] if the dereferencing, *my_optional, should be safe

And once again a terrible API puts the onus back on the user to act like a computer.


Nah. In simple cases like that, the compiler would always be able to optimize away an extra check, if such a check were present. After inlining operator bool and operator *, it would look something like

    if (my_optional._has_value)
        if (my_optional._has_value)
            do_stuff(my_optional._value);
        else
            panic();
and the compiler knows that the second if statement will pass iff the first does.

On the other hand, if the test is further away from the dereference, and perhaps the optional is accessed through a pointer and the compiler can't prove it doesn't alias something else, it might not be able to optimize away the check. However, that probably doesn't account for too high a fraction of uses.


How is that different to this?

    if (my_pointer != NULL)
        do_stuff(*my_pointer)


Native pointers do a bunch of different things depending on the context. In contrast, optional has clear semantics.

For instance, the ++ operator doesn't work for std::optional. For a native pointer, you just have to know (how?) not to use it.


In terms of generated code, it is exactly the same. But that's not the point of optional types.

The point of optional types is to force you to write checks for undefined values, otherwise your code will not compile at all. In the old-fashioned style of your example, you might forget to check for the possibility of a null pointer/otherwise undefined value, and use it as if it were valid.


But the whole genesis for this comment chain is that you can make exactly the same mistake with std::optional.


Only if you deliberately unwrap the optional, which means you either don’t know what you are doing (in which case no programming language feature will be able to save you), or that you’ve considered your options and decided you want to enter the block knowing some variable can be undefined.

IMO, that’s not the same as not having optionals at all, and writing unconditional blocks left and right that may or may not operate on undefined values. It’s super easy to just dereference any pointer you got back from some function call in C++, without paying attention. Optionals force you to either skip the blocks, or think about how to write them to handle undefined values. Also, it’s ‘code as documentation’ in some sense, which I’m a big proponent of.


> Only if you deliberately unwrap the optional

"Deliberately unwrap the optional" is the exact same thing as "deliberately unwrap the pointer", you just deref' it and it's UB if the optional / pointer is empty.

C++'s std::optional is not a safety feature (it's no safer than using a unique_ptr or raw pointer), it's an optimisation device: if you put a value in an std::optional you don't have to heap allocate it.

> It super easy to just dereference any pointer you got back from some function call in C++, without paying attention.

And optionals work the exact same way. There's no difference, they don't warn you and they don't require using a specific API like `value_unchecked()`. You just deref' the optional to get the internal value, with the same effects as doing so on any other pointer.
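The two faces of std::optional discussed above can be shown side by side: the value lives inline (no heap allocation, which is the "optimisation device" point), operator* is unchecked, and .value() is the checked accessor. A small sketch (checked_or is an invented helper):

```cpp
#include <optional>

// optional<T> keeps its value inline ("stack-allocated"), so no heap
// allocation ever happens. operator* is the unchecked deref (UB when
// empty); .value() throws std::bad_optional_access instead.
int checked_or(const std::optional<int>& o, int fallback) {
    return o.has_value() ? *o : fallback;  // deref only after the check
}
```

The check-then-deref shape is exactly the `if (my_optional) do_stuff(*my_optional);` idiom from earlier in the thread.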


I agree with this and would take it a step further, and say that recent changes to the STL are the worst parts of modern C++. For example std::regex supports 6 distinct syntaxes, the PRNG stuff is massively over-engineered, the "extensions for parallelism" add complexity without giving enough knobs for any real perf improvement. Meanwhile there's gaping holes like UTF-8 support. It's a sad state.


How is the prng over engineered? I agree its a little clunky for casual use but it makes all the right decisions, imo, for serious use of prngs (e.g. reproducible experiments for Monte Carlo methods in simulation and statistics)


Initializing the mersenne twister is really hard: https://github.com/PetterS/monolith/blob/master/minimum/core...

Edit: There are two links in the code with more info.


Most other random libraries, in whatever languages, do not even offer the option of this "right" version of initialization. They all just seed with a 32-bit integer. If you are content with those libraries, you should be content with a simple `std::mt19937{std::random_device{}()}`.
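For illustration, the one-word seeding next to a full-state seeding might look like this (a sketch; std::seed_seq's imperfect mixing is itself one of the subtleties the linked code works around):

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <functional>
#include <random>

// Simple seeding: a single 32-bit word of entropy, as most languages do.
std::mt19937 make_simple() {
    return std::mt19937{std::random_device{}()};
}

// Fuller seeding: gather entropy for all 624 words of the MT state and
// feed it through std::seed_seq.
std::mt19937 make_full() {
    std::array<std::uint32_t, std::mt19937::state_size> seed_data;
    std::random_device rd;
    std::generate(seed_data.begin(), seed_data.end(), std::ref(rd));
    std::seed_seq seq(seed_data.begin(), seed_data.end());
    return std::mt19937{seq};
}
```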


I don't think this is true for modern languages.

Rust seeds the whole state by default: https://docs.rs/rand/0.6.5/rand/

Julia seems to seed 128 bits: https://github.com/JuliaLang/julia/blob/5741c7f53c5ea443bbd7...

However, your statement seems to apply to old languages:

C# uses only 32 bits and the time for seeding: https://docs.microsoft.com/en-us/dotnet/api/system.random?vi...

Java only supports 48 bit seeds: https://docs.oracle.com/javase/8/docs/api/java/util/Random.h...


One shouldn't std::move in a return. Returning a local is already automatically an rvalue, however explicitly moving it disables copy elision.
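The pessimization can be observed with a small move-counting type (`Tracer` and the helpers are invented for illustration): returning the local by name is NRVO-eligible, while wrapping it in std::move forces a move construction.

```cpp
#include <utility>

// Invented tracer type that counts move constructions.
struct Tracer {
    static inline int moves = 0;   // C++17 inline variable
    Tracer() = default;
    Tracer(const Tracer&) = default;
    Tracer(Tracer&&) noexcept { ++moves; }
};

Tracer make_elided() {
    Tracer t;
    return t;             // NRVO-eligible: typically zero copies/moves
}

Tracer make_pessimized() {
    Tracer t;
    return std::move(t);  // disables NRVO: forces a move construction
}

int moves_in(Tracer (*factory)()) {
    Tracer::moves = 0;
    Tracer x = factory();
    (void)x;
    return Tracer::moves;
}
```

NRVO is permitted rather than guaranteed, so the elided version may still move on some compilers, but it can never be worse than the std::move version.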


You are completely right. That should be fixed


Mersenne Twister has a huge state, and if you have to use the MT (i.e. you need a very long period), you will probably want to be able to initialize it properly. For the common use case, a linear congruential generator, which is initialized with just one integer, is enough.


Huh? You can't possibly need a period 2^19937-1.


The problem is there are no easy-to-use sensible defaults, just a confusing bunch of options with a bunch of apparently easy but subtly wrong ways to use it. Having the power is useful, but I would also just like a rand() (or better, a randrange()) which actually works.


> it makes all the right decisions, imo, for serious use of prngs

Apart from an awkward API that's hard to use correctly, Mersenne Twister which is basically the main generator has been obsolete for years (bad quality RNs, slow, huge state, ...).


What's the modern C++ equivalent to C's

    (rand() % (b - a)) + a;
or Python's

    random.randint(a, b)
Easy to use and often good enough.
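For reference, the closest standard equivalent is a uniform_int_distribution over [a, b] inclusive, which also avoids the modulo bias of the rand() version (a sketch; the static generator is just one way to hold the state):

```cpp
#include <random>

// Rough C++ equivalent of Python's random.randint(a, b): uniform on the
// inclusive range [a, b], without modulo bias.
int randint(int a, int b) {
    static std::mt19937 gen{std::random_device{}()};
    return std::uniform_int_distribution<int>{a, b}(gen);
}
```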


> (rand() % (b - a)) + a;

This is no longer uniform, because it introduces a bias towards small numbers.


Yes, it's less than ideal. But like I said, often good enough. Sometimes you just want a simple way to get something approximately random, the actual distribution might be unimportant.


What parts specifically? By my estimation, the only non-deprecated part of the standard library that really reeks of pre-C++11 (what I believe most consider the advent of "modern") is iostream. Most of e.g. the containers have been kept up to date with new features of the language (e.g. move semantics, constexpr).

The standard library certainly is lacking things which are commonly used (say, JSON parsing or database connection), but I think this is a conscious decision (and IMO the correct decision) to include only elements that have a somewhat settled, "obvious", lowest-common-denominator semantics. There's rhyme and reason to most of the most commonly used elements that is decidedly lacking from e.g. Python's (much more extensive) standard library.


> The standard library certainly is lacking things which are commonly used (say, JSON parsing or database connection),

I strongly disagree. It's quite obvious that the C++ standard library does not need to add support for "common things", because they already exist as third-party modules.

In fact, this obsession to add all sorts of cruft to the C++ standard is the reason we're having this discussion.

If there is no widely adopted JSON or DB library for C++ then who in their right mind would believe it would be a good idea to force one into the standard?

And don't get me started on the sheer lunacy of the proposal to add a GUI library. Talk about a brain-dead idea.

People working on other programming language stacks learned this lesson a long time ago. There's the core language and there's the never-ending pile of third-party modules. Some are well-made and well thought-out, others aren't. That doesn't matter, because these can be replaced whenever anyone feels like it. This is not the case if a poorly thought-out component is added to an ISO standard.


Standard libraries shouldn't include "leaf" modules, but probably should include interface / adapter modules. So no to JSON, but maybe yes to a serde interface. No to a database driver, but maybe yes to an interface like JDBC.

Without common interfaces, flexibility in implementation is much more expensive, and innovation suffers too, as new things are harder to get off the ground without existing code that they can cheaply plug into.
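As a hedged illustration of the idea (no such interface exists in the standard; every name here is invented), a JDBC-style adapter layer might be nothing more than an abstract base that vendors implement:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical minimal driver interface in the JDBC spirit.
struct Row {
    std::vector<std::string> columns;
};

class Connection {
public:
    virtual ~Connection() = default;
    virtual std::vector<Row> query(const std::string& sql) = 0;
};

// Vendors ship implementations behind the common interface; this stub
// just returns no rows.
class NullConnection : public Connection {
public:
    std::vector<Row> query(const std::string&) override { return {}; }
};
```

Application code would program against Connection and swap drivers freely, which is exactly the flexibility argument above.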


There's an argument to be made for having basic so-called leaf modules in the standard library. That is, it makes it far simpler to get a basic installation of C++ and start doing cool things with it. Experienced developers or people who need domain-specific features would be using their own specialised libraries anyway.

So instead of trying to figure out which one of the dozens of GUI frameworks to use to make a window and have it change colour, you just write it using the standard library. If you want to do an HTTP request, there will be code in the standard library for that.

It will also save work trying to figure out which third party library to use when you want to do these things locally on a small test project.


> So instead of trying to figure out which one of the dozens of GUI frameworks to use in making a window and have it change colour, you just write it using the standard library.

Congrats, now you're stuck with something like Xwindows or MFC or AWT.


They're standard for the OS but not a standard for the programming language library.

Granted AWT wasn't great but you could still make a GUI with it straight out of the box. It allowed you to make windows and buttons and start exploring the programming language.

Like I said having a standard library option won't eliminate third party libraries, it will just provide something in the box for people to start using straight away.


Which while not ideal, are guaranteed to be present, contrary to third party libs.


That assertion is disingenuous at best.

That guarantee is only achievable at the expense of forcing compiler developers to maintain a GUI toolkit for all platforms. Who in their right mind believes that's reasonable or desirable?


Everyone that wants a language to thrive instead of dealing with a thousand incompatible implementations.

Many C++ targets don't support IO or networking, so let's not burden embedded compiler developers with standard library bloat.


So instead of a GUI library what about a HTTP or a network library as part of the standard? Surely handling TCP and UDP connections is an order of magnitude easier to implement and maintain.


>I strongly disagree. It's quite obvious that the C++ standard library does not need to add support for "common things", because they already exist as third-party modules.

It's not obvious to me at all.

In fact, if that was a valid argument, it would be for C++ not having a standard library at all, as everything (including vectors, strings, etc) also exists as "third-party modules".


> In fact, if that was a valid argument, it would be for C++ not having a standard library at all

Putting aside the continuum fallacy, it's easy to understand how C++ would be better served by having access to a collection of third-party components instead of repeating C's and even Java's mistakes.

The Boost project is a very good example, as is the wealth of JSON and XML parsers.

In fact, this lesson is so blatantly obvious that essentially all mainstream programming languages simply adopt official package managers and leave it to the community to develop and adopt the components they prefer.


>Putting aside the continuum fallacy, it's easy to understand how the C++ would be better served by having access to a collection of third-party components instead of repeating C's and even Java's mistakes.

Java is very well served with its library. It would have been nowhere near as successful without it.


Third party modules would be a huge mess if there weren't at least common interface types like std::string_view and std::unique_ptr.


I believe that C++ needs a fat standard library because using third party libraries is a bit cumbersome in C++. Alternatively there could be a blessed build system that makes third party library integration as easy as Cargo or Go Modules.


I would say that CMake pretty much covers that. What it lacks is a central registry, but I think C++ never intended to have one.


What C++ intended and what C++ should have intended are two different things.


> I strongly disagree

No you don't. Read the rest of the sentence you quoted :)


You're not disagreeing with colanderman:

> (and IMO the correct decision)


> include only elements that have a somewhat settled, "obvious", lowest-common-denominator semantics

Can you, off the top of your head, tell me what irregular modified cylindrical Bessel functions are and the last time you needed to use one? And yet, they were included in the standard library in C++17: https://en.cppreference.com/w/cpp/numeric/special_math/cyl_b...


I can't, but I bet they have a standard and well-accepted definition in the mathematical community.

In fact, pretty much any real-valued mathematical function passes the test.

The interface is settled, almost by definition since C++ functions are inspired by mathematical functions: pass in arguments, return result. Use range/domain exceptions or NaN for reporting such errors.

The semantics are obvious: compute the named function.

The interface is lowest-common-denominator: include float, double, and long double overloads.

In fact, the same or similar interface is used in almost every language I've encountered. To contrast, the same is absolutely not true of e.g. a database module. I don't think I've ever seen two alike, disagreeing over even basic things such as whether the cursor or the transaction is the basic unit of interaction.
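That settled convention can be sketched with invented stand-ins (`half` and `checked_sqrt` are not standard names): one overload per standard floating-point type, and domain errors reported via NaN rather than a bespoke error channel.

```cpp
#include <cmath>
#include <limits>

// Sketch of the conventional math-function interface: three overloads,
// one per standard floating-point type.
float       half(float x)       { return x / 2.0f; }
double      half(double x)      { return x / 2.0;  }
long double half(long double x) { return x / 2.0L; }

// Domain-error convention: report out-of-domain inputs as quiet NaN.
double checked_sqrt(double x) {
    if (x < 0.0)
        return std::numeric_limits<double>::quiet_NaN();
    return std::sqrt(x);
}
```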


There’s a nearly limitless number of standard, well-defined functions with a single use case, like those. There’s hardly a point in implementing them in the standard library, and C++ is the only language I’m aware of that has them.

If the goal was to create a specialized library for solving differential equations, those would be handy there. But if not, even if you tried implementing everything that you could potentially think of to implement, there are hundreds of things that are orders of magnitude more useful to have and equally well-defined and standardized—even if we limit ourselves to mathematics alone, I’d much rather see basic constants like π or e included, or quaternions, or arbitrary precision integers, or decimal numbers… or dozens upon dozens of other things before that.

But mainly, I find it impossible to maintain the claim that any general-usage language, like C++, that implements such niche functions is trying to keep its standard library small and ‘include only elements that have a somewhat settled, "obvious", lowest-common-denominator semantics’.


Then you're not disagreeing with me, because those functions pass my test as I demonstrated above.

Why do these functions bother you so much? It can't be namespace pollution; they're under std::. It can't be that you disagree with their interface or semantics, since by your own admission you don't even know what they are.

You named some other features, such as quaternions, that you think would be better for implementors to spend their time on, but surely you can imagine someone like yourself who is tired of having to define the Bessel functions every time they start a new project, and can't imagine why the C++ committee saw fit to include something so useless and obscure as quaternions before getting to Bessel functions.


> Then you're not disagreeing with me, because those functions pass my test as I demonstrated above.

Yeah. I must have misunderstood your definition of ‘obvious’—I thought you meant ‘an obvious inclusion to the standard library’, not ‘having an obvious definition’. The definition is obvious; why they should be in a standard library is not.

> […] since by your own admission you don't even know what they are

I mostly do—I studied mathematics. Or, to be more precise, I learned about them, then never used them in programming, had to remind myself what they were, and even after that, I don’t find them useful enough to warrant inclusion in the standard library. Thus, since they were included, I think that’s good evidence of the C++ committee not trying to keep its standard library concise.

> You named some other features, such as quaternions, that you think would be better for implementors to spend their time on, but surely you can imagine someone like yourself who is tired of having to define the Bessel functions every time they start a new project, and can't imagine why the C++ committee saw fit to include something so useless and obscure as quaternions before getting to Bessel functions.

The thing is, I can’t. If you use them, you want better support for solving differential equations than C++ offers anyway, so it’s more of a ‘OK, I have this small part already implemented, but I still have to find ways of doing the remaining 95%’. This, plus the fact that I’m quite certain that people using C++ for 3D geometry outnumber people using it for solving differential equations by a few orders of magnitude—a cursory glance at GitHub showed me that the only C++ projects that mention it are… implementations of a standard library (and forks upon forks of those).

My problem with this is that C++ is now in a very strange place—it implements some very high-level, niche features, bloating the language and its implementations (the size of glibc is a practical problem) while still lacking many others that seem much more ‘obvious’ (i.e. if given an unknown language, I would be much less surprised to find them included in its standard library). In the end, I have a language that both has an annoyingly big standard library and heavily relies on other, non-standard ones for quite a lot of things.


> C++ is the only language that I’m aware of that has those.

https://golang.org/pkg/math/#J0


Surely JSON does have a settled, obvious, lowest-common-denominator semantics?


Of the design of a parsing and encoding library? Not at all. Do you parse as a stream or all in one go? Are values represented as a special "json" type, or as built-in types? How should arrays and objects be represented? Are integers and reals different types? Are trailing zero decimals significant? Do you allow construction of arrays and objects in any order, or only sequentially?

(Granted, I've written my own C++ JSON library which I believe answers all these questions in an intuitive way, following both the design principles of the C++ standard library, and the lowest-common-denominator semantics of JSON, but it's sufficiently opinionated that I doubt I could convince any significant portion of C++ users that it's the "right" way to do things. Even if it "is", demonstrating such is nowhere near as easy as it is for unique_ptr, vector, string, thread, etc., each of which are more or less the "obvious" designs given certain constraints such as RAII to which the standard library adheres.)
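One possible set of answers to those design questions, as a hedged sketch (this is not the commenter's library; every name here is invented): values as a recursive sum type over built-in types, integers and reals collapsed into double, and objects as order-preserving key/value sequences that can be built in any order.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <variant>
#include <vector>

// Invented recursive JSON value type. std::vector supports incomplete
// element types (C++17), which is what makes the recursion legal.
struct JsonValue {
    using Array  = std::vector<JsonValue>;
    using Object = std::vector<std::pair<std::string, JsonValue>>;

    std::variant<std::nullptr_t, bool, double, std::string,
                 Array, Object> v;
};
```

A parser and encoder would then be functions between std::string and JsonValue; streaming vs. one-shot parsing is a further design axis this sketch deliberately leaves open.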


I work in a shop where there was a significant effort in a cross-platform library a long time ago, but that old code has been showing cracks and emitting creaks ("Hey, folks, guess how many debugging hours it took to find out that lambdas didn't work here, either"). Use of the standard library is frowned upon except when absolutely necessary, so there's no avoiding the thing. From time to time someone will joust at it and pull a particularly screwball section forward a decade or two, but on the whole the old stuff is just never going away short of a catastrophe. It makes onboarding interesting, and it makes you reflect philosophically on expertise that is valuable absolutely no place else.

I work on other projects, or on my own stuff at home, and I can breathe again. I don't always need reverse iterators on a deque, but dammit they are there if I need them.

However, I have been in too much C runtime code to be entirely happy. I've seen too many super-complicated disasters, for instance the someone who really wanted to write the Great American OS Kernel but who wasn't allowed on the team, and so had to make their bid for greatness in stdio.h instead. You learned to tread carefully in that stuff, the only good news being that if you broke something it might have turned out to be already busted anyway and no harm done, philosophically speaking, I mean.

There are no good answers :-)


So the language is evolving, it is used by projects that are old and still in good enough shape that one can adapt their concepts to new things, and as sugar on top, it does not break backwards compatibility.

As such it just sounds like a mature technology with a huge installed base that is still holding traction. Generally, maturity, traction and adaptability can be considered indicators of health, not malady.

Beauty is overstated. Engineering can be art but it doesn't have to be.

Jokes aside, I use C++ daily and see it as Warty McWartface, and could spend a long time ruminating about its faults. But adapting old stuff to new boundaries is always going to be messy. Generally, rewriting history creates more problems than it solves.


I don't see the problem. You are free to use such a modern library (Google does, it's called absl).

The good thing here is that the standard library doesn't require 'magic' to be implemented (unlike Swift where the standard library relies on hidden language hacks).


The difficulty here is combining multiple libraries each using its own abstractions.

For example, since the standard library does not have a Matrix class suitable for numerical applications (or maybe it does today...) using multiple libraries each with its own Matrix class is difficult. Multiple libraries are needed since one library may not contain all numerical algorithms one may require for a given app.

This is not a problem for Google where I assume everyone is using internally written code -- but is a problem for most of us.


> For example, since the standard library does not have a Matrix class suitable for numerical applications (or maybe it does today...) using multiple libraries each with its own Matrix class is difficult.

well, Python comes with a builtin "matrix-like" array type and yet it's not the one which is the most used in scientific computation.


Python does provide the buffer interface, however (which the `array` module implements), linking Python's buffers and memoryviews to numpy arrays and to multiple other third-party array-like and table-like types and structures.


.. because it's (relatively speaking) brand new?


Sure it's possible to use a non-standard "standard library". But at that point you're already halfway to using a different language so why not consider switching from C++ to D / Rust / Go?


The whole point of C++ is that it enables writing more powerful libraries, capturing semantics in libraries that can then just be used. C++ is still quite a lot more powerful for this purpose than Rust. Rust will get better at it over time, but it has a long way to go, and C++ is not sitting still.

Rust is still a niche language, and if its rates of adoption and improvement do not keep up, it will remain a niche language, and fade away like Ada.

I cannot imagine a serious programmer switching from C++ to Go. If you can, you have a much livelier imaginary life than I do.


The Ada partisans are all out in force here in this thread to defend Ada, all four of us. haha... For what it's worth, niche as Ada may be, it's an _important_ niche. It remains widespread in safety-critical applications, and isn't going anywhere anytime soon. It's really good to see Rust taking lessons from Ada/SPARK in the area of formal proofing! If any language is going to threaten C++, it looks like Rust. I don't expect an Ada resurgence to happen, unfortunately.

> I cannot imagine a serious programmer switching from C++ to Go. If you can, you have a much livelier imaginary life than I do.

This got a laugh out of me.


A large majority of Ada partisans found their corner in Java, C# and C++'s type system improvements over C, and made the best we could from the sour grapes of C's copy-paste compatibility.


Calling myself an Ada partisan is a bit of a stretch. I've recently begun using it for embedded development, which is a domain almost completely dominated by C. That's the angle I'm coming in from.


It depends on each one naturally.

For me, coming from Turbo Pascal 3 - 6, it allowed me in 1993 to use a language with a similar level of safety and language features, instead of having to deal with PDP-11 constraints.

I was always the weird kid that asked the professors if I could deliver C projects written in C++ instead, which thankfully some of them did accept.

Especially given that at my degree, C was already out of fashion by the early 90's. First-year students got to learn Pascal and C++, and were expected to learn C from their introduction to C++ programming.


> Rust is still a niche language

Only just barely at this point. It has significant projects from a lot of the largest companies (Google, Microsoft, Amazon, etc). Firefox is using it, Dropbox is using it, Red Hat is using it.


In five years it might be just barely a niche language. In ten, if it catches on, it won't be.

If it does, its users will have come over from Java, C#, and C.


Is this the "no serious programmer" fallacy?


No true hn commenter would make such a mistake.


From what I can see, the whole point of C++ is to wrap existing C libraries and call them OO :-)


No, using a library is not halfway to using a different language.

Languages exist to allow you to define your own layers of abstractions. The language choice ideally reflects what abstractions are useful for your project.


> But at that point you're already halfway to using a different language

This statement makes no sense at all. Using a third-party library that's not specified by the same ISO standard that specifies the core language does not create "a different language".

It just means you're actually using the programming language to do stuff.

This isn't the case even if someone uses a toolkit that relies on preprocessor tricks to give the illusion of extending the core language, such as Qt.


C++ is designed for people to make nice libraries. Unlike other languages there is nothing special about the standard library (no magic language hacks). All libraries are first class citizens by design.


Good luck implementing something like std::is_standard_layout without "magic language hacks". No, not all libraries are made equal, and std is part of the language now; there is no way back.


You've cherry-picked a type trait as your example, which arguably could be a core language feature made to look like a third-party module.

Meanwhile, do you believe it's hard to implement a container?

And no, adding cruft to the STL is not a one-way street. See for example the history of C++'s smart pointers.


It is pretty hard to implement a container with all the precise invariants and guarantees that the Standard requires.

But more to the point, your implementation might still not be as fast as the standard library one, because the standard library can make assumptions about the compiler that you cannot in portable code - what is UB to you might be well-defined behavior to stdlib authors. Thus, for example, they might be able to use memcpy for containers of stdlib types that they know are safe to handle in that manner.


A look into the type_traits header reveals that is_standard_layout is implemented with standard C++.



It checks whether a non-standard feature is available, and otherwise falls back to a standard implementation.

My argument was that it was possible to implement it with standard C++.


There's no way to implement that type trait using standard C++. The implementation does check if a non-standard feature is available, and if not it delegates to is_scalar which in turn delegates to is_enum which in turn delegates to is_union. is_union can not be implemented in a standard conforming manner without compiler support and libc++ unconditionally returns false if compiler support is not available, which does not conform to the standard.


Lack of tooling for HPC, GPGPU, mixed language graphical debugging across Java and .NET, native support for COM/UWP, game engines like Unreal, CryEngine and Unity.


Many libraries only expose C or C++ APIs. Some of these libraries are hard requirement, like OS kernel APIs or GPU APIs.

Insufficient SIMD support in other languages, Intel only supports their intrinsics in C and Fortran.

Tools for C++ are just too good, IDEs, debuggers, profilers.


What are the hidden language hacks in Swift?


I wouldn’t call them hacks, but there are things in the runtime that you can’t implement yourself in Swift. Examples (corrections welcome):

- you can’t allocate memory and then turn it into a Swift object.

- you can’t write Decodable in pure Swift (reflection isn’t powerful enough to do “set the field named “foo” in this structure to “bar”)

- reference counts are hidden from Swift code (yes, there’s swift_retainCount to read them, but that’s documented as returning a random number (https://github.com/apple/swift/blob/master/docs/Runtime.md) because it should not be used). So, if the compiler emits more reference count logic than needed in the data structure that your library uses, there’s no way to improve on it.


There are a lot more of these, I can't find a comprehensive list unfortunately.


What are those "hidden language hacks" in Swift?


Can you or others post such alternative standard libraries? The only ones that come to mind are Boost (which is a nightmare for compile times and I feel is a mishmash of old and new) and Google's Abseil, which I haven't actually tried enough to form an opinion about.


> Can you or others post such alternative standard libraries?

POCO comes to mind.

https://pocoproject.org/


Also, Qt is basically a Java-like library, with built-in stuff for networking, GUI, XML, JSON, WebSockets, multimedia, etc.

Or you have some "domain-specific" libraries like OpenFrameworks which is very nice if you are making visual art since it comes with a lot of very simple primitives to draw shapes, etc.


ACE was an attempt that is basically dead.

There was some talk about an std2, but I gather support for it is too low to be pursued seriously.


What legacy? It's not like there was a single "before time." There are problems coming up with all of it, because the underlying runtime model provides too few guarantees. We'll be plugging holes the rest of our natural lives.


No, the problem is NOT the underlying runtime model. In fact it's often the opposite: the STL tries to provide too much.

An excellent example is std::unordered_map. This type was introduced to address perf problems with std::map. But unordered_map forces closed addressing, separate allocation, etc. which limit its performance. In return you get stronger iterator invalidation guarantees but these are rarely useful. Meanwhile Abseil's swiss tables, LLVM's DenseMap, etc. illustrate what a high-performance C++ hash table could be.


This has been discussed extensively in the C++ community. I think if you need very safe code, you shouldn't use string_view or span without thinking about the potential consequences. These were added to the language to avoid memory allocation and data copying in performance-critical software.

Herb Sutter has concrete proposals to address this issue and Clang already supports them: https://www.infoworld.com/article/3307522/revised-proposal-c...


> think if you need a very safe code, you shouldn't use the string_view or span without thinking about the potential consequences.

That’s the whole point: your caveat shows that it’s C/C++ which are unsafe in their very nature and therefore should not be used in code exposed to potentially malicious (e.g. user or network) input. Which is just about everything useful.

HPC are generally closed systems and have different threats, but the industry just needs to run (not walk) away from C/C++ for the majority of use cases.


There is no such language as C/C++. There is C, which cannot be written safely, and there is C++, which can be, and quite often is.

It has been many years since I shipped a memory bug in C++. It is just not a real worry for me. I am constantly dealing with design, specification, and logic flaws, which affect Rust equally, or moreso.

I am aware that there are plenty of other programmers out there, writing bad code in what they would call C++. I would like them to write good code. If it takes Rust to make them write good code, so be it. But if they began writing decent C++ code, that is just as good.

The threshold is not zero memory errors. The threshold is many fewer memory errors than logic or design errors. The more attention your language steals from logic and design, the more of those errors you will have. Such errors have equally dire consequences as memory errors, and are overwhelmingly more common in competent programmers' code, in C++ and in Rust.

C++ is (still) quite a substantially more expressive language than Rust, which is to say it can capture a lot more semantics in a library. Every time I use a powerful, well-tested library instead of coding logic by hand because it can't be captured in a library, that is another place errors have no opportunity to creep in.

So it's great that Rust makes some errors harder to make, but that is no grounds for acting holier-than-thou. Rust programmers have simply chosen to have many more of the other kinds of errors, instead.

Every programmer who switches from C to Rust makes a better world; likewise Java to Rust, or C# to Rust, or Go to Rust. Or, any of those to C++.

Switching from C++ to Rust, or Rust to C++, is of overwhelmingly less consequence, but the balance is still in C++'s favor because C++ still supports more powerful libraries.

You might disagree, but it is far from obvious that you are correct.


> It has been many years since I shipped a memory bug in C++. It is just not a real worry for me.

The whole comment sounds so much like well written satire, but I think he's being serious.


I agree with him. In many practical applications with well-designed class hierarchies it just really isn't much of an issue. Hasn't been for me either.


> with well-designed class hierarchies

:eyes:


you can roll eyes at me all you want, but I've been programming in C++ for a long time. These memory access issues just don't seem to be a big problem for us in practice. That's because we wrap all raw memory manipulation in appropriate classes for our application, so it's just not an issue. I agree it could be an issue in theory.


He rolls his eyes at "hierarchies". Libraries do make the difference.

Somebody else interjected Design Patterns. You can define a design pattern as a weakness in your language's ability to express a library function to do the job.


... and with proper use of Design Patterns!


Why is it difficult to believe? I've also written plenty of C++ code without memory bugs. It's not that hard if you play by a few simple rules.


> I've also written plenty of C++ code without memory bugs.

The classic response to this is "That you know of." Consider that even quality-conscious projects with careful code review like Chrome have issues like this use-after-free bug from time to time.

https://googleprojectzero.blogspot.com/2019/04/virtually-unl...

So when people claim that they personally don't write memory bugs I tend to assume that they are mistaken, and that the real truth is that they haven't yet noticed any of the memory bugs that they have written because they are too subtle or too rare to have noticed.


In this case, I mean a subsystem that has been in production since 2006 and has been processing hundreds of thousands of messages a day. I don't claim that it's perfect or bug-free, but if it had significant memory errors I'd have heard about it. I designed and implemented it to use patterns like RAII to manage memory, and it's worked quite well.


Chrome is in an exceptionally hard place because of its JIT. Your language cannot tell you if it's safe for your JIT to omit a bounds check.


That post describes two vulnerabilities: one is in the JIT, but the other one is in regular old C++ code. More generally, JIT bugs are a relatively small minority of browser vulnerabilities. More often you see issues like use-after-free in C++ code that interacts with JS, such as implementations of DOM interfaces, but the issues are not directly JIT related and would be avoided in a fully memory-safe language.


Chrome, like Firefox, is not an example of modern C++ code. Google's and Mozilla's coding standards enforce a late-'90s style. It is astonishing they get it to work at all.


That is why use tools like valgrind to verify that you got it right.


When I worked on a mobile C++ project at Google, we went exceptionally out of our way to avoid memory issues.

We ran under valgrind and multiple sanitizers (and continuously ran those with high coverage unit and integration tests). We ran fuzzers. We had strictly enforced style guides.

We still shipped multiple use after frees and ub-tripping behavior. I also saw multiple issues in other major libraries that we were building from source so it can't be pointed at as just incompetency on my team.

Basically, it might be possible but I think it's exceptionally more difficult to write memory safe C++ than this thread is making it sound.


Writing memory safe programs in C++ is possible. Most coding styles and some problem domains don't lend themselves to it naturally, though. In my experience, restricted subsets used for embedded software vastly reduce the risk of introducing errors and make actual errors easier to spot and fix.


> Writing memory safe programs in C++ is possible.

Everything "is possible" in the sense that in theory you can do it. But if time and time again people fail to do it. Even people who invest almost heroic levels of effort (see above: valgrind, multiple sanitizers, and so on) you get to the point where you have to accept that what is possible in theory doesn't work in practice.


I have seen it done in practice, on rather large systems. But it requires actual, slow software engineering instead of the freestyle coding processes that are used in most places.


My main rule is "no naked new," meaning that the only place the new operator is allowed is in a constructor, and the only place delete is allowed is in a destructor (unless there's some very special circumstance). This style lends itself to RAII. The other rule is to use the standard library containers unless there's a very good reason not to do so. That seems to cover most of the really basic errors.


Yes, I know how you are obliged to code at Google. It is astonishing that anything works.

The "strictly enforced style guides" strictly enforce '90s coding habits.


Together with a test-suite that covers the exponential number of paths through your code...


Changing programming language neither reduces the need for test coverage nor does it magically increase coverage.


A type system changes the need for test coverage because it eliminates whole classes of bugs statically that would need an infinite amount of tests to eliminate dynamically.


That leaves an infinite amount logic bugs to be tested for. Types cannot fix interface misuse at integration and system level. So no, this does not reduce the need for testing.


Whether they reduce the need for testing overall is arguable. But what matters in this discussion is that types can guarantee memory safety, meaning that the cases that you forgot to test – and there will always be such cases, no matter how careful you are (just look at SQLite) – are less likely to be exploitable.


Types can only provide limited memory safety. There is a real need to deal with data structures that are so dynamic as to be essentially untyped. Granted, this usually happens in driver code for particularly interesting hardware, but it happens. Also, I have not yet seen a type system that is both memory safe and does not prohibit certain optimizations.


I haven't written c++ seriously for a number of years. Do you still have to do all that "rule of three" boilerplate stuff to use your classes with the STL? Is it better or worse now with move constructors?


It's a bit better with C++11 syntax where you can use = delete to remove the default constructors/destructors, e.g.:

  class Class
  {
      Class();
      Class(const Class&) = delete;
      Class& operator = (const Class&) = delete;
      ~Class() = default;
  };
Which I find slightly cleaner than the old approach of declaring them private and not defining an implementation, but the concept hasn't changed much. I'd love a way to say 'no, compiler, I'll define the constructors, operators, and destructors I want - no defaults' but that's not part of the standard.

Move constructors are an extra that, if I remember correctly, doesn't get a default version here, thankfully.


So, so much better. Nowadays we "use" what has been called "rule of zero". Write a constructor if you maintain an invariant. Rely on library components and destructors for all else.


> https://jaxenter.com/security-vulnerabilities-languages-1570...

there's a world in terms of safety between C and C++.


The comparison in that link is pretty meaningless; it scores languages by how many vulnerabilities have been reported in code written in them, without even making an attempt to divide by the total amount of code written in them, let alone account for factors like importance/level of public attention, what role the code plays, bias in the dataset, etc.


To be fair the report explicitly states this limitation. jcelerier just conveniently forgot to mention it.


You're misrepresenting the report in order to justify your bias. Direct quote from the report:

    This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.
In other words the report explains this with 1) there being more C code in volume and 2) more C code in security-relevant projects (which are reviewed more by security researchers). It also states explicitly that your conclusion is not to be drawn from this.


Readable version of the quote:

> This is not to say that C is less secure than the other languages. The high number of open source vulnerabilities in C can be explained by several factors. For starters, C has been in use for longer than any of the other languages we researched and has the highest volume of written code. It is also one of the languages behind major infrastructure like Open SSL and the Linux kernel. This winning combination of volume and centrality explains the high number of known open source vulnerabilities in C.

Please, never ever use code snippets for quotes, unless you hate mobile users. Just put "> " in front.


> unless you hate mobile users

or just period. I'm reading this on a 4K desktop display, and I still have to scroll. it's only useful for actual code, which is very rarely posted on hn.


> It has been many years since I shipped a memory bug in C++. It is just not a real worry for me.

Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it? Experienced C++ programmers do seem to learn how to avoid those bugs (although very often what they write is still undefined according to the standard - but e.g. multithreading bugs may be rare enough not to be encountered in practice). But that's of limited use as long as it's impossible for anyone else to look at a C++ codebase and confirm, at a glance, that that codebase does not contain memory bugs.

> C++ is (still) quite a substantially more expressive language than Rust, which is to say it can capture a lot more semantics in a library.

> So it's great that Rust makes some errors harder to make, but that is no grounds for acting holier-than-thou. Rust programmers have simply chosen to have many more of the other kinds of errors, instead.

Citation needed. What desirable constructions are impossible to express in Rust? I've no doubt that you can write some super-"clever" C++ that reuses the same pointer several different ways and can't be ported to Rust - but such code is not desirable in C++ either (at least not in codebases that more than one person is expected to use). Meanwhile Rust offers a lot of opportunities for libraries to express themselves clearly in a way that's not possible in C++: sum types let you express a very common return pattern much more clearly than you can ever do in C++. Being able to return functions makes libraries much more expressive. Standardised ownership annotations make correct library use very clear, and allow a compiler to automatically check that they're used correctly.

> Every programmer who switches from C to Rust makes a better world; likewise Java to Rust, or C# to Rust, or Go to Rust. Or, any of those to C++.

> Switching from C++ to Rust, or Rust to C++, is of overwhelmingly less consequence, but the balance is still in C++'s favor because C++ still supports more powerful libraries.

> You might disagree, but it is far from obvious that you are correct.

On the contrary, it's obvious from the frequency with which we see crashes and security flaws in C++ codebases that the average programmer who switches from Java to C++, or C# to C++ makes the world a worse place. It's overwhelmingly likely to be true for Rust to C++ as well.


> Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it?

Structure the code in a way such that it is obvious what happens. Use "semantic compression" (e.g. be clear about your concepts and factor them in free standing functions), but don't overabstract/overengineer.

Eliminate special cases. If the code has few branches and data dependencies, then successful manual testing already gives high confidence that it will be pretty robust in production.

Prefer global allocations (buffers with the same lifetime as the process), not local state. This also makes for much clearer code, since it avoid heavy plumbing / indirections.

I tend to think that modern programming language features mostly enable us to stay longer with bad structure. And when you hit the next road block, fixing that will be correspondingly harder.


> Structure the code in a way such that it is obvious what happens. Use "semantic compression" (e.g. be clear about your concepts and factor them in free standing functions), but don't overabstract/overengineer.

This sounds little different from "write good code, don't write bad code." I'm sure we all agree on these things, but I'm sure the people who write terrible code weren't trying to be unclear or trying to overengineer.

> Eliminate special cases. If the code has few branches and data dependencies, then successful manual testing already gives high confidence that it will be pretty robust in production.

True enough, but that's so much easier in a language with sum types.

> Prefer global allocations (buffers with the same lifetime as the process), not local state. This also makes for much clearer code, since it avoid heavy plumbing / indirections.

That's a pretty controversial viewpoint, since it makes composition impossible (indeed taken to its logical extreme this would mean never writing a library, whereas the grandparent was convinced that more use of libraries was the way to write good code).

> I tend to think that modern programming language features mostly enable us to stay longer with bad structure. And when you hit the next road block, fixing that will be correspondingly harder.

Interesting; that's the opposite of my experience. I find modern language features mostly guide us down the path that most of us already agreed was good programming style, enforcing things that were previously only rules of thumb (and that we had to resist the temptation to bend when things got tricky). And so the modern language forces you to solve problems properly rather than hacking a workaround, and the further you scale the more that will help you.


>> Eliminate special cases. [...]

> True enough, but that's so much easier in a language with sum types.

These languages make it easier to have more special cases. There's a difference.

> That's a pretty controversial viewpoint, since it makes composition impossible (indeed taken to its logical extreme this would mean never writing a library, whereas the grandparent was convinced that more use of libraries was the way to write good code).

I don't see why that should be the case. Aside from the fact that composition/"reuse" is way overrated, libraries can always opt for process- or thread-wide global state. Another possibility would be to have global state per use (store pointer handles), and passing a pointer only to library API calls. The latter is also the most realistic case since most libraries take pointer handles. I absolutely have these handles stored in process global data. For example, Freetype handle, windowing handle, sound card handle, network socket handle, etc.

Also called a "singleton" in OOP circles. Singletons are nothing but global data with nondeterministic initialization order and superfluous syntax crap on top. Other than that, they are indeed good choices (as is global data) since lifetime management and data plumbing is a no-brainer.

> I find modern language features mostly guide us down the path that most of us already agreed was good programming style

But just the paragraph before you said you didn't agree with mine? In my opinion, OOP, or more specifically, lots of isolated allocations connected by pointers/references, make for hard to follow code since there is so much hiding and indirection even within the same project/maintenance boundaries without benefit. In any case I absolutely agree that this style is not doable in C. You need automated, static or dynamic (runtime) ownership tracking.


> I don't see why that should be the case.

At the most basic level, if project A makes use of library B and library C, then you want to be able to verify the behaviour of library B and library C independently and then make use of your conclusions when analysing project A. But if library B and library C use global state then you can't have any confidence that that will work. E.g. if both library B and library C use some other library D that has some global construct, then they will likely interfere with each other.

> Another possibility would be to have subproject-wide global state, and passing a pointer only to library API calls. The latter is also the most realistic case since most libraries take pointer handles.

At that point you're not using global state in the library, which was the point.

> you can always opt for process- or thread-wide global state

That doesn't solve the problem at all.

> Also called "singleton" in OOP circles. Singletons are nothing but global data with nondeterminstic initialization order and superfluous syntax crap on top.

Indeed, and they're seen as bad practice for the same reason as global state in general.


> At that point you're not using global state in the library, which was the point.

Yes. But I want to make clear that you are still using global state for all uses within the project itself. The library can be implemented in whatever way. For example, setting the pointer in a global variable on API entry ;-)

> That doesn't solve the problem at all.

WHICH problem? I don't think there is one.

> Indeed, and they're seen as bad practice for the same reason as global state in general.

This is foolish. There is no problem with global state. Global state is a fact of life. Your process has one address space. It has (probably) one server socket listening for incoming requests. It has (probably) one graphics window to show its state. Whenever you have more (e.g. file descriptors, memory mappings, ...), well then you have a set of that thing, but you have ONE set :-). And so on.

You are not writing a thousand pseudo-isolated programs. But ONE. One entity composed of a fixed number of parts (i.e. modules, code files) that work together to do what must be done.

Why add indirection? Why make it hard to iterate over all open file descriptors? Why thread a window handle through 15 layers of function calls when you have only one graphics window? It adds a lot of boilerplate. It even brings some people to invent hard to digest concepts like monads or objects just to make that terrible code manageable. It makes the code unclear. Someone once described it with this analogy, "I don't say ''I'm meeting one of my wives tonight'', unless I have more than one".


> Yes. But I want to make clear that you are still using global state for all uses within the project itself.

But if we believe in using libraries then often our project will itself be a library.

> The library can be implemented in whatever way. For example, setting the pointer in a global variable on API entry ;-)

And then you have the problem I mentioned: if there is a diamond dependency on your library then the thing using it will break.

> WHICH problem? I don't think there is one.

The problem of not being able to break down your project and understand it piecemeal.

> Global state is a fact of life. Your process has one address space. It has (probably) one server socket for listening to incoming request. It has (probably) one graphics window to show its state.

All those global things are a common source of bugs, as different pieces of the program make subtly different assumptions about them. Perhaps a certain amount of global state is unavoidable. That's not an argument against minimizing it.

> You are not writing a thousand pseudo-isolated programs. But ONE. One entity composed of a fixed number of parts (i.e. modules, code files) that work together to do what must be done.

If you write a program that can only be understood in its entirety, you'll be unable to maintain it once it becomes too big to fit in your head. Writing a thousand isolated functions gives you something much easier to understand and scale.


> The problem of not being able to break down your project and understand it piecemeal.

That's just incredibly untrue. It's FUD spread by OOP and FP zealots.

> All those global things are a common source of bugs, as different pieces of the program make subtly different assumptions about them.

Do you want to say that my logging routine is more complex because my windowing handle is stored in a globally accessible place?

> Perhaps a certain amount of global state is unavoidable. That's not an argument against minimizing it.

My advice is to make clear what the data means. Make it simple. Don't put a blanket over what's already hard to grasp.


> Do you want to say that my logging routine is more complex because my windowing handle is global data?

If your logging routine touches your windowing handle that certainly makes it more complex. If I'm meant to know that your logging routine doesn't touch your windowing handle, that's precisely the statement that it isn't global data.


It is global data, because it can (and should be) used without threading it through 155 functions.

In terms of the relational data model, it is global data because there is always one, and only one, of it.


> But if we believe in using libraries then often our project will itself be a library.

How about making the project good first? Let's try to get something done instead of theorizing.


You mean start by building something that can be used and tested in isolation, rather than trying to build an enormous system in one go? Isn't that what you've been arguing against?


No I mean solve the problem "we need to build a program that does what it's required to do" (and no more) before trying to build a library that will cure diseases.


That's a total non sequitur. Libraries can, and usually should, be much smaller than applications.


Libraries are much harder than applications because they must work for a large number of applications with diverse requirements. They need to be more abstract, and therein lies the danger.

Regarding the size, clearly wrong. It depends a lot on the library. A windowing or font rastering library will be a lot larger than your typical application.

And for libraries that are much smaller than the application itself, why bother depending on them? (Anecdote, I heard the Excel team in the 90s had their own compiler).


At this point I'm really unsure whether this is trolling or not.


Just discussing. Why would it be trolling what I do and not what the other guy does?


>Can you write down the algorithm that you use to avoid writing memory bugs? Can you teach others how to do it?

Yes. Code using powerful libraries. Every use of a powerful library eliminates any number of every kind of bug.

Rust has not caught up to C++'s ability to code powerful libraries, and might never. C++ is a moving target. C++20 is more powerful than C++17, which was more powerful than 14, 11, 03.

There are certainly niches for less powerful languages. Rust is more powerful, and nicer to code in, than many that occupy those. It will completely displace Ada, for example.


> Yes. Code using powerful libraries. Every use of a powerful library eliminates any number of every kind of bug.

So if I find that a C++ project is using powerful libraries, I can be confident that it doesn't have memory errors? History suggests not.


If I find a Rust program that is (perforce) not using powerful libraries, can I be confident that it does not harbor grave errors?

Certainly not. Rust takes aim at memory errors, and misses the rest that would be avoided by encapsulating bug-prone code in libraries. C++ enables capturing bug-prone code in well-tested libraries, eliminating whole families of bugs, including, in my recent experience, memory bugs.

That is not to say all C++ code is bug-free. Google and Mozilla code, by corporate fiat, is forbidden to participate.


> If I find a Rust program that is (perforce) not using powerful libraries, can I be confident that it does not harbor grave errors?

You can be confident that it doesn't harbour memory errors. You can be confident that it doesn't contain arbitrary code execution bugs, which is a much better circumstance than with any C++ project I've seen (C++ by its nature turns almost any bug into a security bug).

IME you can also have a much higher level of confidence that it does what you expect (including not having bugs) than you would for a C++ project, because of Rust's more expressive type system.

> C++ enables capturing bug-prone code in well-tested libraries, eliminating whole families of bugs, including, in my recent experience, memory bugs.

And yet in practice you can neither be confident that there are no memory bugs, nor that there are no other bugs. Even the big name C++ libraries are riddled with major bugs. Perhaps libraries that are written in a certain fashion avoid this bugginess, but that's of little use when it's not possible to tell from a glance whether a given library is one of the buggy ones or not.


This is the classic False Dichotomy.

Rust programs have bugs. Rust programs have security bugs. Are they mediated by memory usage bugs? Probably not, unless the program has unsafe blocks, or uses libraries with unsafe blocks, or libraries that use libraries that have unsafe blocks, or call out to C libraries. Or tickle a compiler bug.

Can it leak my credentials to a network socket as a consequence of any of those bugs, memory or otherwise?

Putting your memory errors in unsafe blocks may make them invisible to you, but that does not make them go away.

So, yes, of course it can.


> Can it leak my credentials to a network socket as a consequence of any of those bugs, memory or otherwise?

Sure, that class of bugs still exists. But they're rarer and less damaging (even with stolen credentials, an attacker can't do as much damage as one who had arbitrary code execution).

Rust eliminates many classes of bugs. C++ does not: the fact that theoretically there could be non-buggy C++ libraries doesn't help you out in practice, because there's no way to distinguish those libraries from the very many buggy C++ libraries.

> Putting your memory errors in unsafe blocks may make them invisible to you, but that does not make them go away.

It's just the opposite: it makes the risk very visible, so in Rust you can choose to avoid libraries with unsafe. Whereas in C++ any library you might choose is likely to have memory safety bugs and therefore arbitrary code execution vulnerabilities.


Kind of true; AFAIK Rust binary libraries don't expose safety information, as happens in ClearPath or .NET assemblies.

Still, too many libraries make use of unsafe when they could be fully written in safe Rust.


Rust cannot displace Ada until it fulfills the business and security requirements that keep Ada alive.


"which is just about everything useful". This statement is wildly without merit.

Sure, for the typical user-facing applications HN readers talk about, C++ can certainly contain worrisome vulnerabilities. But many performance-critical applications can tolerate vulnerabilities in favor of latency.

It seems to me that the world of realtime systems including avionics, autonomous control software, trading, machine learning, and more is "not useful" as per your comment. The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

The industry has moved away from C++ for plenty of these user-facing use cases. Codebases like Chrome and Firefox can't just be rewritten in Rust overnight. You can try to rewrite, e.g., SSL libraries, but that has its own host of problems (e.g., guaranteeing constant-time operations).

I encourage the people parroting a move away from C++ to really think about what it is that should move and what the pros/cons are. I think you'll find that many of the things at risk (i.e user facing applications) are already on their way to being rewritten in Go/Rust.


> The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

Could you expand on this? It's a pretty strong claim.

LLVM produces very fast code and is very commonly used for C++ compilation. Rust also has access to the usual low-level control suspects: inline asm, manual memory layout & operations, pointer shenanigans, etc.

Benchmarks are never perfect, but they show that Rust is usually within the ballpark of C++, if not comparable: https://www.reddit.com/r/rust/comments/akluxx/rust_now_on_av....


> The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

I'm curious if perhaps you're using the word 'performance' here in a way I'm not familiar with, especially given the context of metaprogramming. As far as the usage of 'performance' that I'm familiar with, C++ and Rust come in at about even in benchmarksgame, which matches my experience. The optimization pass of Rust compilation is carried out by LLVM on LLVM IR, so it would be very surprising if it reliably underperformed compiled C++, especially given that the compiler has more freedom to optimize due to more extensive constraints on the language.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

Rust has vastly better metaprogramming, and as much low level control, no? And many low-level things are well-defined in Rust, and undefined behaviour or implementation-defined in C++.


Depends. Some metaprogramming features in C++ are currently ahead of Rust (values as generic parameters, generic associated types, constexpr, etc.), but Rust is ahead in other areas (procedural macros) and is working on parity in the other cases I mentioned. Meanwhile, Rust has none of C++'s legacy cruft, and its typed, trait-based generics are arguably a better foundation for metaprogramming than C++'s "dynamically typed at compile time" template system.


> The extreme low level control that C++ offers and powerful metaprogramming allows for performance that even Rust cannot hope to rival.

Rust is developed by Mozilla because they needed a language they could write a faster browser in. The first Rust component in Mozilla was a CSS library they had attempted to parallelise twice in C++ (with some of the best C++ programmers) and failed. Rust treats 'can't be as fast as C' as a bug.


You mention control software as an example. What makes C++ better there? My guess is compiler options for more hardware targets, but is it something else also? Is it really C++ and not C that is most prevalent on embedded systems?


Well, it shows that there are aspects of C/C++ which are unsafe. But you don't have to use string_view or span, you know...


It was presented to show the "just use modern c++" counterargument to discussing the unsafety of c++ isn't a great argument. There are modern parts that are still unsafe.


Fair enough. But tatersolid seems to be condemning the entire language, which is a step too far for the evidence given.


40 years of security vulnerabilities in C and C++ code is plenty of evidence to condemn those languages as unfit for most purposes.

The evidence is overwhelming that it is not possible to write non-trivial C or C++ that is safe in the face of adversarial input. Microsoft, Google, Oracle, Linus, etc. have all tried for decades and failed miserably. All the resources and expertise in the world still results in unsafe software when C and C++ are used.


std::string_view is supposed to be idiomatic C++, though.


> needs to run (not walk) away from C/C++ for the majority of use cases.

I think we need to stop talking about C/C++ as if they are particularly related. As for performance and C: I'll happily give up some of the former for better security.


The thing is, Rust has tools that are easier to use _and_ have great performance _and_ prevent security and stability mistakes.


However Rust is single vendor and single implementation, has a much smaller community and ecosystem than C++, is not standardized, and does not support all of the platforms and use cases that C++ does.


All of those problems are long-term solved by using more Rust, whereas none of C++'s problems are long-term solved by using more C++.

(Personally, I don't find single-vendor or lack of standardization a problem in practice, and I've never written C++ for a platform Rust doesn't support.)


Both of them can be solved by giving it more time, but C++ is currently way ahead.


I'm not sure that's true. Giving C++ 30 years has resulted in the things identified in the article. (In particular, giving auto_ptr 20+ years hasn't resulted in anything that really fixes the problem.) It is not clear to me that it's moving in the right direction, so I don't think more time will help. C++ is definitely ahead in popularity but is neither ahead nor obviously aimed in the right direction in safety.

Giving Rust about ten years has resulted in significant growth in popularity and tooling, including attempts to write new implementations of the language (e.g., mrustc), so given more time and in particular given more production users, it seems reasonable to expect it will figure all those things out.


I don't think that C++'s memory safety issues can be solved by giving it more time.


Except for stuff like "trusting trust", I find no need for "multiple vendors of Rust toolchains". It only comes handy when the language itself is not truly open source, and is in itself a form of a product.

Building on that " is not standardized," is not a problem, because one Open Source implementation is de facto the standard. Which I find much better than forever fixing your code, working around incompatibilities, bugs, etc. in compilers from different versions.

Which leaves "does not support all of the platforms and use cases that C++ does" which is indeed true.


Sometimes different vendors provide some benefits. For example Intel's C++ compiler produces (or used to produce?) much more efficient numerical code than either gcc or clang.

So for numerical applications C++ may make more sense than Rust. Rust does have the advantage of being based on an LLVM backend. So perhaps different vendors can compete by writing more efficient backends that are applicable to both C++ and Rust (but you probably lose some information when skipping the compiler front end)


> For example Intel's C++ compiler produces (or used to produce?) much more efficient numerical code than either gcc or clang.

I'm not an expert, but I believe that Intel could have implemented their hardware-specific optimizations in any other compiler framework (either gcc or clang). In this case multiple language implementations, while commercially viable, are not beneficial to all users.


Use cases and compiler options go hand in hand. Every implementation is a trade-off and different fields demand different trade-offs.


But a community and an ecosystem can be built over time (and are being built for Rust incredibly rapidly). Whereas a problematic language can't really be "fixed", it can only be added to.


Fair points, but none of them are inherent to the language itself.


The thing is....people don't run "the language itself".


That’s very true. But all of those communities, ecosystems, standards, and use cases have an extreme learning curve and a very deep problem with security. :-)


Rust's learning curve isn't exactly a shallow one either.

For the record I think Rust has a lot going for it, but it is not the C++ killer that many are touting it to be.


It's a bona fide C++ killer for applications that are both security and performance critical. It's already gaining traction for those applications even within relatively conservative engineering organisations.

That said, there are many performance-critical applications that are not security-critical, and in those I'd expect C/C++ to persist pretty much indefinitely. And many security-critical applications are not performance-critical, and can perfectly well be served by garbage-collected languages like Java/C#/Go.


Cargo is the problem for those organizations. People who worry about security and safety often develop on airgapped networks. You can go no_std for small stuff. For bigger stuff you could mirror crates.io, but that isn't a well-supported workflow, and it's a lot of code from a lot of randos. The notion of a blessed subset would help get more buy-in from that community. Even still, rustup doesn't work on airgapped dev nets, and it's a nice feature, especially if you are cross-compiling.


Cargo now supports airgapped use (no crates.io, no github) since the latest release.


Awesome! Can you provide some documentation to get me started? I have been unable to find any.


Thankfully Cargo is an optional component. We've replaced Cargo for internal use (all dependencies checked into the monorepo and compiled with Buck).


It is a legitimate C killer. C++, not so much.


People who liked C (and didn't like C++) are more likely to move to Go. Rust has a healthy community of ex and current C++ programmers.


People who don't like GC will not move to Go.


I don't like GC (on an ideological level), but I still write most stuff in Go because I'm so insanely productive in it. Will probably use more Rust once async/await is there and mature enough.


C++ has a huge learning curve too, though; the difference is it lets you write whatever you want. The learning curve is for writing correct C++. It's deceptive. It's like skiing vs. snowboarding: skiing you pick up fast, but getting good is damn hard and few bother; snowboarding is damn hard to pick up, but then it's pretty easy to become really good.


> then it’s pretty easy to become really good.

To modify this I'd say that becoming reasonably good is pretty easy (and I'd agree, easier than skiing). To become really good takes a long time and a lot of dedication, and the difference in difficulty between skiing and snowboarding disappears. Same with programming: some languages make it easier to go from 0 to your first app, some make it easier to write solid production-ready code that earns you a paycheck, but becoming really good is always hard and independent of the language you're using.


I guess that is why I enjoy snowboarding and never bothered with skiing. :)


> Rust's learning curve isn't exactly a shallow one either.

I don't know how people can be so sure of this. We know essentially nothing about how to teach or learn Rust effectively, it's something that the community is just starting to look at. However, one thing we do know is that the detailed support that the Rust compiler provides to the novice programmer is quite simply unparalleled in other mainstream languages. It's basically the ultimate T.A.


I’m not sure I’d use language as strong as yours (though I personally love the Rust compiler’s messages), but I will say it’s got to count for something that it doesn’t automatically and silently generate special member functions that break the memory model (cough, rule of three…)


I am going to postulate here that a language standard which includes undefined behaviour is not really a standard.


Does rust have a structure to handle something like a stringview?


Yes, a borrowed string slice `&str`, whose lifetime is tracked precisely by the compiler to avoid use-after-free errors. https://doc.rust-lang.org/book/ch04-03-slices.html#string-sl...


That's one of the language's primitives: https://doc.rust-lang.org/std/primitive.str.html


std::auto_ptr was fixed one or two times before being replaced. It is a little bit unsettling to see newer features having the same sort of caveats, despite there being a lot of smart people planning the future of the language. I imagine this is due to the way that the existing features combine combinatorially to multiply the complexity of every new feature.


It is true that C++ has several warts, some of them caused by the copy-paste compatibility with C.

Which is both a blessing and a curse. A blessing, as it allowed us Pascal/Ada/Modula refugees never to deal with what was already an outdated, unsafe language by the early 90's.

But it also makes it relatively hard to write safe code when we cannot prevent team members, or third-party libraries, from using C-isms in their code.

Regarding the alternatives, Swift is definitely not an option outside Apple platforms. And even there, Apple still focuses on C++ for IOKit, Metal, and LLVM-based tooling.

Rust, yes. Some day it might be, specially now with Google, Microsoft, Amazon, Dropbox,... adopting it across their stacks.

However, for many of us it still doesn't cover the use cases we use C++ for, so it is not like I will impose on myself, the team, and customers a productivity pain, taking double the time to write a COM component or native bindings in C++ for .NET consumption, just to feel good.

When we get Visual Rust, with mixed mode debugging, Blend integration and a COM/UWP language projection for Rust, then yeah.


> It is true, C++ has several warts some of them caused by the copy-paste compatibility with C.

I mean, that's a bit of a cop-out given that C++ has more non-C warts and UB than C warts and UB at this point. It's not just "copy-paste compatibility with C" that made dereferencing std::unique_ptr or std::optional UB.


Sure it is, because they need to be compatible with C pointer semantics.

The large majority of C++ UB comes from compatibility with ISO C's 200+ documented cases of UB.

And the ISO C++ working group is trying to reduce the amount of UB in ISO C++, which is exactly the opposite of the ongoing ISO C2X proposals.


> Sure it is, because they need to be compatible with C pointer semantics.

They don't need to be compatible with unsafe/UB C pointer semantics; allowing them both to contain garbage and to be dereferenceable were explicit decisions the C++ committee did not have to make, but chose to.


Some people prefer a Python 2/3 community schism, others prefer that tools actually get adopted in spite of a few transition flaws.


The C++ people are trying to refit ownership to the language without adding a borrow checker. This is painful. They've made it possible to write code that expresses ownership, but they can't catch all the places where the abstraction leaks.

string_view is really a non-mutable borrow. But the compiler does not know this.


Not quite true, Google and Microsoft are precisely adding a borrow checker to their static analysis tools.

https://herbsutter.com/2018/09/20/lifetime-profile-v1-0-post...


> but they can't catch all the places where the abstraction leaks.

Why does static analysis not work here?


It does, it's just a warning and not an error. And also experimental.

But it does exist, and does catch some of these errors. Example: https://godbolt.org/z/CZTfSx


I'd rather run diagnostics as a separate CI pass, so warnings work for me perfectly.


It's still strictly better than a language with no borrow checker and no way to express ownership (other than comments), like C, or C++ itself before all the smart pointers.


From the article:

> Dereferencing a nullptr gives a segfault (which is not a security issue, except in older kernels).

I know a lot of people make that assumption, and compilers used to work that way pretty reliably, but I'm pretty confident it's not true. With undefined behavior, anything is possible.


Linux hit a related situation: a harmless null pointer dereference was treated by GCC as a signal that a subsequent isnull test could not be true, causing the test to be optimized away. https://lwn.net/Articles/575563/


My opinion on that is that such code MUST NOT be optimized away. Instead it should be a compile error.


You might wish for that, but that ship has sailed. Undefined behavior means that the implementation can do whatever it wants. That said, I do expect tools, both sanitizers and static analyzers, to improve and detect more of these kinds of cases.


The original intention of standardization was that compilers would gradually reach consensus on what the behaviour in certain cases should be, and once that happened the standard would be updated to standardize that behaviour. Compilers are allowed - indeed encouraged - to provide additional guarantees beyond the minimum defined in the standard (indeed, part of the point of UB is that a compiler is allowed to specify what the behaviour is in that case).


Well, not exactly. There are things that are UB according to the standard but that particular compilers give an option to make defined: see `-fwrapv`, for example.


There have been static analyzers that will detect this for years. They report "check for null after use" or some such.


The problem, as far as I understand it (though I’m a layman), is that by the time the dead code optimization pass runs, the code has been transformed so much that there’s no obvious way for the compiler to tell the difference between “obvious programmer-intended null check that we shouldn’t optimize out” and “spurious dead code introduced by macro expansion” or (in C++) “by template instantiation”.


Couldn't user-defined branches be tagged by such a compiler, and, if a tagged branch is eliminated, an error generated with a reference to the tagged line in question?


That is a good idea and I’ll admit that I’m not sure why it isn’t implemented.


Why should it be a compile error? The pointer may be null, but is not guaranteed to be.

If you mean that C++ should require a null check before dereferencing any pointer that is not guaranteed to be non-null, then that would break most existing C++ code out there, so it's a non-starter.


In the particular situation they're talking about, you have a pointer to a struct, which you dereference by accessing one of its fields. The null check happens after the dereference, almost certainly a mistake.


Absolutely. In my experience, if Clang can deduce that a function will definitely trigger UB, such as definitely dereferencing a null pointer, it generally optimizes everything in the function after the dereference into a single ud2 instruction (which raises the #UD exception in the CPU).

This is something really hardwired into the C and C++ languages. Even if the underlying operating system perfectly supports dereferencing null pointers, compilers will always treat it as undefined behavior. (On Linux, root can mmap a page of memory at address 0, and certain linker options can cause the linker to place the text section starting at address 0 as well.)


The irony is that it's mostly unsafe if you test for null, such that the compiler can omit the test; if there's no evidence the pointer can be null, you just get a normal memory access. The optimizer is not optimized for the most intuitive behavior.


The null checks are only optimized away if you've already dereferenced the pointer before the null check within a scope. The optimizer's rationale: you've already dereferenced it, so it must not be null, therefore the null check is unnecessary.

Also, you can "safely" dereference nullptr, just so long as you don't attempt to actually access the memory. C++ references are nothing more than a fancy pointer with syntactic sugar.

For example:

    int* foo = nullptr;
    int& bar = *foo;               // no blowup
    std::cout << bar << std::endl; // blowup here

My personal $0.02 is that the C++ standard falls short with language like "undefined/unspecified behavior, no diagnostic required." A lot of problems could be prevented if diagnostics (read: warnings) were required, assuming devs pay attention to the warnings, which doesn't always happen. For example, Google Protobuf has chosen, at its own and its clients' peril, to ignore signed/unsigned comparison warnings and the potential over/underflow errors and vulnerabilities behind them.


Dereferencing a null pointer to convert it to a reference causes undefined behavior, there's nothing safe about it!

"Note: in particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by dereferencing a null pointer, which causes undefined behavior."


UB isn't "safe" so I'm unsure what your comment is getting at


I guess the point I was trying to make is that what is referred to colloquially as dereferencing is different from how the compiler sees it. We see `*foo`, and we know that to be UB, but the compiler doesn't really see it until the load. Until it's actually used, it's effectively a dead store and will be eliminated anyway.

    int& bar = *foo;
Doesn't actually dereference foo. No load is issued from the address stored in foo. Until you either load or store through bar, no null dereference has occurred.

Further, if bar is never used, no actual dereference has occurred. In fact, no assembly instructions will be emitted for the above statement, because it is pure syntactic sugar. Pointers and references in C++ are the same, except with different syntax and the secret club handshake that references are presumed never to be null (but there are ways they can become null, and thus the UB).

Edit: formatting, at least attempted


The problem is that we don't know what the compiler might think..

If I write something along the lines of

  int& bar = *foo;

  if(!foo) {
    // do something
  }

The compiler very well might (and would be perfectly within its rights to) completely eliminate everything inside of the if(!foo) since it can assume the pointer is non-null because it is being dereferenced.


This is very definitely false. That is totally UB, launch-the-missiles stuff. Check your references before you repeat this silliness.


Definitely not true. Consider an IoT device without an MMU.


Most of the ones I am familiar with had 0 as a non-writable address, so you'd still crash. [Edit: Though that's probably hardware-specific, and the hardware was usually custom.] It might be called a "bus error" or some such instead of a "segfault", but it was pretty much the same behavior.


Plenty of microcontrollers have a vector table at address 0. Best place to start injecting code.


Sure. The 68000 series did. But address 0 held the starting program counter, and address 4 held the starting stack pointer (or vice versa - it's been a while). Those two were usually mapped to ROM, because they had to have the right values even on cold boot. But that also meant that they weren't writable. So if you had a null pointer, you could read through it, but an attempt to write through it would give you a bus error.


I really don't get all the hate that C++ gets. The suggested alternatives in the article are Rust and Swift. What if you need to develop a cross-platform GUI that has a backend running a CUDA or OpenCL algorithm? For the former, you can use Qt, which isn't without its warts but is pretty tried and true in my experience (see KDE, VTK, etc.). For the latter, you'll end up writing your CUDA code in C++ anyway. I guess you could go the route of writing bindings, but that is not without additional effort. Not that it won't happen for Rust, but C++ also has tooling suited for enterprise use that is largely unmatched in other languages (Visual Studio, Qt, etc.). Sandboxing, static analysis, and fuzzing tools are also mostly built for C/C++ codebases. It's also an ISO-standard language, which makes it both a language full of warts due to design by committee and a candidate for a stable, long-lasting language that will outlive many of us. (Try finding an ISO-standard language you don't hate.)

Either way, C++ is certainly not for every project, but the articles scattered around the web claiming it should be superseded by Rust are plentiful. These opinion pieces make no attempt to credit C++ for the cases where it does make sense to use it. Despite its quirks, it is still the best way to program HPC applications or cross-platform GUIs that are not Electron-based. The security tooling around it and the fact that it's an ISO-standard language make it a solid choice for many enterprises.


I do not think it helps to think in emotional terms such as 'hate'. There is nothing wrong with discussing potential problems, and the current utility of the language should not stop us asking whether we could do better in future.

FWIW, I use C++, not Rust or Swift, and I have a fair amount of knowledge and experience vested in it, but I think these questions are worth asking.


> I do not think it helps to think in emotional terms such as 'hate'

I think 'hate' really does represent the mindset of some people (even if they are a minority), but even if we ignore this extreme, the level of irrationality in technical discussions is generally quite high. You need rational people to have a rational discussion. The sad reality is that a lot of technical discussions are only superficially rational and are often a political play to assert superiority over other people (it's true for languages, frameworks, code editors, methodologies, etc.).


The questions are worth asking. But the Rust crowd is not asking questions, they're dictating solutions, or rather that one old solution of rewriting everything to Rust.

Meanwhile the Firefox rewrite, the premier example of what they propose, is still plodding along, and, Mozilla PR blogs aside, Firefox is still plugging vulnerabilities in each release and will be for the foreseeable future.

Now let's look at the Swift community... do we see blog posts from them every week about how awesome Swift is and why one should rewrite their working C and C++ code in Swift? No, they keep doing their thing: Swift is becoming better at cross-platform support, and it's also getting some support for machine learning.

That's how one grows a language, through building successful projects, staying positive (and having an entire platform behind it). Not through doomsday scenarios and a constant barrage of criticism.


> That's how one grows a language, through building successful projects, staying positive (and having an entire platform behind it). Not through doomsday scenarios and a constant barrage of criticism.

This is exactly what the Rust community is doing! RIIR is something that's only really insisted on for relatively small pieces of security-critical code. With huge codebases like Firefox the rewrite is done piecemeal, to put the rewritten code in use as quickly as possible. The "doomsday scenario" talk about memory-unsafe languages does not come from people writing Rust, it mostly comes from the security community, even at places like Microsoft - because guess what, they've literally been running around with their hair on fire for decades, and they're sick of this especially now that something like Rust is available!


Saying that C++ won't "save" us is already pretty emotional and, put simply, wrong. We are not really facing imminent doom or anything that would justify that word, other than an overemotional point of view, biased by personal feelings.


C++ does have its positives, as you mentioned, but those positives do not make its negatives go away, nor having negatives means that there aren't positives. You can dislike some parts of the language while still using it for its positive aspect - that doesn't mean the negative parts do not exist nor mentioning them means that there are no positives.


Once again, this is not merely mentioning negatives, it's just more submarine advertising for Rust.


Rust is the only serious attempt to fix those negatives while remaining in the same niche, so bringing it up in this context is natural.

And C++ can't really truly fix them without breaking backwards compatibility with all the legacy C++ and C code, which is its main selling point.


It's the only "serious" attempt as declared by whom exactly, the committee of serious attempts?

There are other serious attempts (D, Swift, Go) which the Rust community likes to dismiss for various reasons, but at least two of them are currently much more successful than Rust. They don't have to be 100% in the same niche to take a bite of marketshare.

Even if C++ breaks backwards compatibility in some ways, it will still have better backwards compatibility to itself and C than Rust or any other language. This break could be something as radical as a C++ "unsafe", or it could be clang's -Wlifetimes, or something else. Credit's due to Rust here for pushing some parts of the C++ community to search for solutions.


I do not dispute that the languages that you've listed are serious attempts. They do not remain in the same niche, however. I would define that niche as "capable of replacing C even in free-standing implementation".

For D and Go, having a GC immediately puts them outside of that niche. For Go, I would also add all the FFI weirdness due to its weird stack discipline, which means that it is non-zero-overhead when interacting with non-Go code - a fatal omission for any contender for a low-level systems language.

Swift is much closer to the metal, and I would consider it a serious contender if it was pushed on all platforms. But it seems that Apple is not interested in its use outside of their ecosystem, which constrains its effective niche to be much narrower than C++ or Rust, ironically.

And yes, of course C++ is always going to have better backwards compatibility. If it didn't, it wouldn't be C++. But its ability to fix issues is directly correlated with that compatibility - it's a dial where you can have more of one and less of the other, as you choose, but you can't have both. Rust (and Swift) can fix more problems, or can fix problems in better ways, because they are not so constrained.

Conversely, if C++ were to introduce safe-by-default, and require explicit opt-in into unsafe - with all present code being considered unsafe - then what you have is a new language that just happens to embed C++ for compatibility reasons. At that point you might as well fix the syntax warts etc as well in that new safe language, since it breaks everything anyway.


While there are obviously still cases where C++ makes sense to use today, those cases are overwhelmingly based on the age and maturity of the C++ ecosystem. Now that Rust has proven that a language can provide memory safety without compromising (much) on performance, it is clear that the scope of C++'s supremacy is in permanent decline.

As Rust (or another language with similar safety/performance properties) matures and its ecosystem grows, C++ will increasingly become a language of tiny niches and legacy codebases.

In other words: C++ is the new Fortran.


> In other words: C++ is the new Fortran.

Which makes Rust the new... APL?

I think the analogy is pretty apt as far as it goes. Fortran by the 70's was a crufty language with a bunch of legacy mistakes that remained very popular and very useful and would continue to see active use for decades to come.

And everyone knew that. And everyone had their own idea about the great new language that was "clearly" going to replace Fortran. And pretty much everyone was wrong. The language that did (C) was one no one saw coming and frankly one that didn't even try to fix a lot of the stuff that everyone was sure was broken.

For myself, I despair that Rust has already jumped the proverbial shark. Its complexity is just too severe; the only people who really love Rust are the people writing Rust libraries and toolchains, and not the workaday hackers who are needed to turn it into a truly successful platform.


It’s definitely easier to reason about than C++ because it errs on the side of safety and explicitness. You can use things you don’t understand without fear, which straddles the boundary in a good way IMO. To your point, that doesn’t make it simple.

As a work-a-day hacker it’s completely become my go to language when I’m writing tools, libraries or just want to knock out a simple algorithm to prove myself right or wrong.


> It’s definitely easier to reason about than C++

See... I don't think that's true, and argue the huge body of C++ code and talent in the ecosystem is an existence proof to the contrary.

I mean, sure, C++ has its crazy edge cases and its odd notions. But you don't need to understand the vagaries of undefined behavior, or the RVO, or move semantics to write and deploy perfectly sensible code. Literally hundreds of thousands of people are doing this every day.

Now, that may not be a convincing argument about the value of that code. But it's absolutely an argument about the utility of the language in aggregate.

I'll be frank: probably 40% of professional C++ programmers aren't going to be able to pick up Rust and be productive in it, at all. And at the end of the day a language for The Elite isn't really going to mean much. We've had plenty of those. Rust is the new APL, like I said.


I'm not really arguing about the value of that code either, just that it's probably wrong, probably trivially breakable due to the sheer mountain of complexity underlying it. The compiler just happened to let it through because it can't help you. The language doesn't give it enough information to do so effectively.

Just off the top of my head, std::move doesn't... move [1]. It just returns, I kid you not, a "static_cast<typename remove_reference<T>::type&&>(t)", without doing... anything. You can keep on using the old value, probably silently, until out of the blue it stops working one day. Then you're super, duper sad. Even modern language features are, I don't want to say lies, but "hopes and dreams" the compiler can't enforce. It's as if you really wished C++ had Rust's features, but you can't have them without breaking things, so you give it your best shot, which ends up just creating yet more complexity.

Rust's answer is...

  let x = Value::new();
  let y = x;
  let z = x; // compile error: use of moved value `x` (moved into `y`)
C++'s modern features are to Rust's equivalents what the ruined fresco [2] was to the original. If you stand far enough back, it's basically right. If you get up close it's hilariously and trivially broken.

It takes a lot of gymnastics to call this language approachable or understandable. It's basically a coal powered car made of foot-guns. That doesn't mean it's not a car, or that it won't get you where you're trying to go, I'm just saying it's an open question how many pieces you'll arrive in.

[1] http://yacoder.guru/blog/2015/03/14/cpp-curiosities-std-move...

[2] https://www.npr.org/sections/thetwo-way/2012/09/20/161466361...


> Just off the top of my head, std::move doesn't... move [1].

I mean that makes sense in a (somewhat nonsensical) way, std::move is a marker for "you can move this thing if you want".

The much weirder part is that even if a value is moved it's not moved, it's carved out, you get to keep a shell value in a "valid but unspecified state". Reusing that value (which the compiler won't prevent) may or may not be UB depending on the exact nature of the operation and state.

Oh, and of course a change / override to the callee plus a recompile can change the behaviour of the call site entirely (e.g. a previously moved value is not moved anymore, or the other way around), but that's pretty common for C++.


I'm curious, what has given you the impression that Rust is a language "for the elite"? It certainly has some rough edges around learnability, but I don't think anyone is actively trying to discourage people from picking it up. Rust is certainly hard for experienced programmers because some common patterns in other languages are not allowed by Rust's rules, but that's no different than trying to apply OOP in a functional language.


> But you don't need to understand the vagaries of undefined behavior, or the RVO, or move semantics to write and deploy perfectly sensible code.

Don't you kind of have to though? These invariants are in your code no matter what, in the case of Rust they are checked by the compiler (you don't necessarily have to understand every nuance, because it's checked for you), in C++ they aren't checked and are a potential bomb waiting to go off.


Claiming that Rust is a language for 'The Elite' is amusing in light of the recent Rust website redesign, with the following headline[1]:

> Empowering everyone to build reliable and efficient software.

The language is entirely about inclusion, empowerment, and removing the fear of systems development.

[1]: https://www.rust-lang.org/


Perception always lags reality, and it definitely was hard to learn when I picked it up 3-4 years ago. Things have improved so much since then, with non-lexical lifetimes in R2018, better compiler errors, stdlib standardization, RLS + VSCode, etc.


"Rust is the new APL,like I said"

I know you're trying to compare it to APL as that language mostly died off and is thus obscure, but I think the analogy is a little off.

While APL is weird, it is actually much easier for me (someone with less than 15 hours of playing with the language in total over the past few years) to code up basic scripts in it than in something like C++.

I'm being absolutely serious too. C++ is pretty low level and as a Python coder I feel like I'm sinking in quicksand with everything required to do something simple. APL is basically built around passing arrays of numbers or strings to weird symbols that operate on the whole array. This means I can do text processing with only a few symbols and a library function (and all interactively) where C++ requires lots of boilerplate and debuggers and compilation and pointers. In short, APL seems to be a lot less complicated than both Rust and C++ in my opinion and most using it have very little formal programming experience and have no problem picking it up from what I've read.

I know what you were essentially trying to say though.


I picked APL because it matches the "WTF cray cray" aesthetic that Rust's syntax presents to new users. I can see an argument that Ada is the right analogy if you're going for pure complexity.

And yes, Fortran, APL, Ada (also Modula-2 & Oberon, Smalltalk and a bunch of other forgotten languages presented as the Next Big Thing at the time) are all uniformly simpler than either C++ or Rust. The modern world is a more complicated place and programming tools have kept up.


> the huge body of C++ code and talent in the ecosystem is an existence proof to the contrary

Looks to me more like proof that C++ has been around a long time


Lots of languages have been around a long time without attracting billions of lines of code. To get that, the language must be unusually useful.


That's a powerful endorsement of COBOL, but a lot of language success is “when did it become common”, “who was sponsoring it or what libraries did it come bundled with”, and path dependence that makes momentarily (due to transient conditions) sensible choices into standards that are mandatory for decades.


I don't think anybody is denying that C++ was (and is) unusually useful, simply by virtue of being the only serious game in town when you need that whole "don't pay for what you don't use" thing, and general performance stemming from that. And devising a better replacement that retains that feature is hard, which is why C++ had so much time to entrench.

But it doesn't mean that we can't do it better these days.


> I'll be frank: probably 40% of professional C++ programmers aren't going to be able to pick up Rust and be productive in it, at all.

Well, a certain number of professional programmers are past the point of being willing to learn new technologies, so maybe that's true. But close to 100% of the people who might become professional C++ programmers could instead become professional Rust programmers.


Happy workaday Rust hacker here. Coming from higher-level languages, the semantics make much more sense to me, after a quite harsh learning curve and some un-learning. Non-lexical lifetimes are a game changer for Rust's learnability, I think, and I increasingly fail to see use cases where I can't just use it.


If C++ is the new Fortran, Rust might very well be the new Ada. Many of the same relative merits were claimed for Ada as for Rust, and it had the backing of the biggest and best-funded organization in the world, but it faded from view because it did not keep up.

Rust could easily go the same way.


Given the market size for Fortran and Ada, that doesn't bode well for Rust.


Yeah, agreed. The points in the article are valid, but quirks you learn and get past the first time. I still shoot myself in the foot sometimes even though I don't have a single bare new/malloc without a shared/unique ptr! But that's C++ for you.

But, C/C++ is the best option for us for high-performance network processing. We're dabbling with Rust for small applications where we would use Python previously and it's working pretty well -- but there's no way we could use Rust for the core application yet. Modern C++ has really grown on me and it's sometimes a love/hate relationship but totally a huge improvement over ancient C++ or C.


> The points in the article are valid, but quirks you learn and get past the first time. I still shoot myself in the foot sometimes even though I don't have a single bare new/malloc without a shared/unique ptr! But that's C++ for you.

I think the article maybe doesn't do enough to outline the full extent of the problem by focussing on a few counterintuitive cases that are present in C++17, because you're right, all of those cases in the article are ones that can be learned and remembered without issue. The real problem, as I see it, is actually that the core language semantics mean that there's no foreseeable end to the foot-shooting treadmill. Since the language is fundamentally permissive of such things, it's likely that further spec revisions will introduce abstractions like string_view that are easy to use unsafely, aren't flagged by static analysis tools, and end up in security-critical code.

Because this feels like a necessary disclaimer, I don't think that fact justifies migrating every active C++ codebase out there to Rust or anything, since pragmatically speaking there are a lot more factors beyond just core language semantics that go into evaluating the best choice of implementation language. I guess my takeaway is neatly expressed by the post title: there's a sense that I get from C++ users (granted, maybe only naive ones) that sticking to the features and abstractions introduced in C++11/14/17/etc basically eliminates all of the potholes of old, and it's evident that that's not true and will probably continue to be not true.


> but there's no way we could use Rust for the core application yet

I'd love to hear more information on this! In my mental model, you could just use Go to replace your Python utilities, but Rust might be workable for your core (or at least its designers would like it to be and would like to know why it isn't).


We are Rust noobs :-). The Python utilities are random tools and daemons, so it's been a nice experience getting my feet wet in a completely different paradigm with rustc.

So the biggest challenge with moving to another language is reproducing the same low latency and high performance we've carefully designed in C++ to a Rust analogue... which given we haven't really used Rust enough yet to have a total sense of this, is hard.

In the longer term, I can see Rust working its way in gradually -- but of course focusing on biz objectives comes first when we already know how to write high-perf C++, hence spending time playing with Rust on side tooling and other smaller projects.

From what little Rust I've written so far I really do like it, so I'm hoping I can incorporate it more.


>Not that it won't happen for Rust, but C++ also has tooling suited for enterprise use that are largely unmatched in other languages

Hopefully that stuff will be helped with things like Language Server Protocol and Debug Adapter Protocol.


But most of the things you just listed are just aspects of the existing ecosystem (libraries, tooling, etc.). There's no doubt C++ has an incredibly large ecosystem and will therefore be around for quite a while to come, but that doesn't make it a good language, it just makes it one that happens to have been very popular for a very long time. Our industry is one that values progress over tradition in the long run. I think C++ has entered its twilight years. That could mean five, ten, or twenty more years, but I think it's peaked, and I don't think that's a bad thing.


The point is that I've noticed a broad "religious" trend where those promoting Rust don't lend any credit to the places where C/C++ has valid strengths, even if they're due to its legacy. It doesn't do a great service to either community to constantly pit the two against each other, or to misrepresent the other in a way that's not honest. C++ doesn't exist and continue to evolve just because it's been around forever; there are a number of things leading to its continued use that should be brought into these discussions.

C++ isn't going anywhere. In 20 years you may not be writing in it, but you'll still be calling into it somewhere in the software stack (especially if things continue moving the WebAssembly direction).

Even if you're using Python's SciPy today, you're calling into LAPACK, written in Fortran.


Case in point, most modern OS GUIs are written in managed languages nowadays, even Qt has JavaScript bindings now.

Yet, C++ is still there as the binding layer between UI and GPGPU.


That doesn't make C++ a good language. It just is one.

Rust is also a good language. Trash-talking C++ does no one any good.

Overwhelmingly, the substantial gains to be made are in moving people off of C. Every other possible benefit is a rounding error. It is still much easier to get people to move to C++. Once dislodged, they might continue on to Rust, or be seduced by C++'s greater expressive power and more powerful libraries. Either way the world will be better.

C++ today seems weighed down by legacy cruft, compared to Rust, but Rust is rapidly accumulating its own legacy cruft. By the time it is mature it will have easily as much of its own.


I came from a C background (mostly embedded work, where on many platforms a C++ compiler wasn't an option), and eventually moving to C++ for backend and library work was a pain in the ass. I got the hang of it, but never loved it, although for the goals we had to achieve it was a sensible choice. I eventually moved into more of an ops/SRE role, mainly because C++ wasn't entirely my thing, while I'd always been the dev most in touch with the ops side of things.

I do see many of the advantages of both Rust and C++, but one of the reasons I like C is its relative simplicity. I only did a single small thing in Rust to try it out, and while still quite complex, at least it felt more manageable than C++ once you understood the borrow checker. The big elephant in the room however is Go. Every time I started something and would consider Rust, it ended up being 'why not just Go?'.

At least for me, it was a much better suited language to move to coming from a C background. The biggest initial hurdle was setting up the dev environment with the completely backward GOPATH and GOROOT environment variables, which just feels absolutely wrong (although this is now in the process of being addressed). The language itself, however, was an absolute breeze; I felt right at home. Simple, quick, straightforward, with tons of libraries and tools for my current field of work, coupled with performance more than acceptable for 99% of the applications I need, and static binaries which are easily deployed anywhere, eliminating a ton of complexity. Is it perfect? No, but what language is? But if you want to convince C programmers to ditch C for a memory-safe language, Go is IMHO in many cases a much better option to move to.


"or be seduced by C++'s greater expressive power"

There's a deeper debate lying at the heart of the Rust-vs-C++ conversation (it's the same one at the heart of Haskell-vs-Lisp), which is really about expressive freedom vs. the strategic usage of constraints. That debate will, truly, outlive all of us. You can probably guess which side I'm biased towards; I won't lay it all out here.


This is not about "expressive freedom" vs "constraints". Rust lacks many of C++'s key core language facilities to capture semantics in a library. As a consequence, you cannot write powerful libraries in Rust that you can in C++, and you cannot use powerful libraries such as are written in C++.

Since you cannot use these powerful libraries, you are (if you like) "constrained" to write fragile code at what would have been the call site.

Each use of a powerful library eliminates all the bugs that would have come from not using one. Those are bugs that Rust designers have elected to keep, in exchange for the memory-use bugs that we largely eliminate, in C++ code, by reliance on powerful libraries.

Powerful libraries eliminate many, many more bugs besides memory misuse.


Can you give an example of said "key core language facilities" that make some library implementable in C++, but not Rust?

I can give an example of the opposite: language-aware macros. No amount of C++ TMP hackery can approach a well-designed Rust DSL.


Rust macros understand types now? Woohoo!


Can you clarify what you mean by "understand types"? The only thing I can think of in this context is compile-time reflection - but that's not in C++, either (yet).

Ideally, can you give a concrete example of some abstraction that can be implemented in C++ with templates, but not in Rust with generics and/or macros?


"Rust lacks many of C++'s key core language facilities to capture semantics in a library."

Can you be more specific? I don't even know what you mean by "powerful library". If you mean a library that's been developed and debugged for a long time, then sure, C++ currently has the advantage there, but that's a transient state and nonspecific to the language itself. If you mean a library that does wild, earth-moving things then I would call that a liability, not an advantage. The language features that allow such things are sources of bugs that Rust designers have elected not to introduce in the first place.

The question of, "In April 2019, which language and ecosystem are more reliable?" is a perfectly valid one. Stronger language semantics vs decades of library refinement. It's not at all obvious. But the answer to "Is Rust or C++ a better language in the long run?" seems clear to me.


Could you write a Rust equivalent of the STL, or its modern cousin Ranges? That's just one library (as of C++20), but if Rust is not up to that, there's no point in going further.


The equivalent of Ranges is already in Rust’s standard library, and has been forever. We call it Iterator. It also provides extra static guarantees against invalidation.


I see that you did not understand my question.

I see, too, that there are plans for some support for generics in the near future. So the answer might become yes, in time.


Rust has also had generics for a very long time.


Qt has Rust bindings now. I hope CUDA will get replaced with proper cross GPU alternatives anyway. Rust GPU programming should be also possible.


The issue with bindings is that you either commit to maintaining them yourself, or you rely on someone else. In this case, the Rust Qt bindings I found [0] were generated for Qt 5.8, which was released nearly 3 years ago. The tests on Github report "failing".

Then, you have the cognitive overhead of translating the documentation, and other sample code. In my experience with bindings, this ends up requiring knowledge of the language you're binding to. It seems easier to just write it in the native language instead and deal with those quirks rather than bindings quirks.

Do the Rust bindings show the Qt docs in the autocomplete? If there's no input validation on the binding side, then you'll end up in C++ again figuring out how to sort things out.

Regarding CUDA, I think we're all hoping for a cross GPU alternative. There's OpenCL, Sycl, ROCm, Kokkos but their API is also written in (you guessed it) C++. Need to render to OpenGL? You'll be writing in C. Unless one of the companies decides to replace driver interfaces with Rust, any application using them will be dependent on N bindings working.

You're ultimately not escaping C/C++ for any systems development. You either deal with the complexity of interfacing between language A and C/C++ or just deal with the quirks of C/C++ themselves. Pick your poison.

0. https://github.com/rust-qt/ritual


There were multiple attempts at such bindings. The most promising is this one:

https://github.com/KDE/rust-qt-binding-generator

> You're ultimately not escaping C/C++ for any systems development.

We eventually should. C/C++ should retire even for drivers and kernels. But for now there is still a lot of baggage to deal with indeed.


Rust gpu programming is very possible already, but as of my most recent foray into the area it was still very much a black art getting your build environment set up for it. It’s probably just a matter of time before it becomes an easy thing to do.

I’ve done a fair bit of mixed Rust + CUDA C++ though, and found it to be a very nice way to build high performance code with safe high-level interfaces that someone can grab and use with little to no understanding of GPU architectures. It’s even pretty straightforward to build wrapper types that leverage Rust’s ownership system to track lifetimes and safe management of device buffers as well (unfortunately I can’t release that code but it really was pretty simple so hopefully someone else will soon do it openly, or by now maybe someone already has)

