The year of C++ successor languages (accu.org)
176 points by nikbackm on Jan 2, 2023 | 268 comments



These quotes confused me:

> The Rust language model is based around the so-called borrow checker, which tracks the lifetime of all the objects; thus, it can detect safety errors at compile-time and does not require the use of a garbage collector

> Val solves this problem in an entirely different way: it adds restrictions to references, and ensures that nobody can read an object while somebody else is allowed to change it.

Because this is exactly what Rust does (forbidding multiple mutable aliases). After reading the linked reference (1), the difference isn't the program structure but rather that Rust allows references in values whereas Val does not, which obviates the need for ownership semantics and borrow checking (since there are no borrows!).

So my understanding is that both Rust and Val are providing the same guarantee to the program (exactly one writer or any number of readers to values) but choose to do it in very different ways. The interesting thing from the PL design side is that both implementations leak into the semantics of the language (Rust through explicit lifetime annotation, Val through implicit references). This is exactly the thing that other newer systems language designers seem to gloss over or not understand - you can't slap static analysis onto a compiler (takes like: "we can add optional borrow checking in the future") and get the same safety guarantees. That will only go so far, since more expressive semantics make the analysis impossible.

(1) https://www.jot.fm/issues/issue_2022_02/article2.pdf
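For illustration, here's a minimal Rust sketch (a hypothetical example, not from the paper) of the shared-XOR-mutable rule both languages enforce:

  fn main() {
      let mut v = vec![1, 2, 3];
      let r = &v[0];     // shared borrow: a reader exists
      // v.push(4);      // error: cannot borrow `v` as mutable while `r` is alive
      println!("{r}");   // last use of the reader
      v.push(4);         // now exactly one writer is allowed
  }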


> So my understanding is that both Rust and Val are providing the same guarantee to the program (exactly one writer or any number of readers to values) but choose to do it in very different ways. The interesting thing from the PL design side is that both implementations leak into the semantics of the language (Rust through explicit lifetime annotation, Val through implicit references)

Interesting. Can't Val references (based on my reading of https://verdagon.dev/blog/generational-references) be implemented in Rust as smart pointers? It seems that the requirement from the language is ownership (to detect when the reference needs to be freed), besides the error handling.


A solid question, though that post you linked is about Vale, not Val ;) One day I hope to get a drink with the Val, Vala, and VALE folks and have a hearty chuckle at how this happened.

Long story short, generational references like in Vale can't be implemented as efficiently in Rust. The best attempt I've seen so far involves having a side table to store the generations in, which (by my measurements when I tried that exact same approach) is about 4x slower.

Vale makes some pragmatic tradeoffs to be able to store generations inline in the stack, containing struct, or containing array, tradeoffs which I believe are antithetical to Rust's direction.
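For readers curious what the side-table approach roughly looks like, here's a sketch in Rust with hypothetical types (not Vale's actual implementation); the per-access generation check is where the overhead comes from:

  struct Arena<T> {
      slots: Vec<(u64, Option<T>)>, // (generation, value) kept in a side table
  }

  #[derive(Clone, Copy)]
  struct Handle {
      index: usize,
      generation: u64, // snapshot of the slot's generation at handle creation
  }

  impl<T> Arena<T> {
      fn get(&self, h: Handle) -> Option<&T> {
          let (generation, value) = self.slots.get(h.index)?;
          if *generation != h.generation {
              return None; // the slot was freed and reused since `h` was made
          }
          value.as_ref()
      }
  }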


This is very interesting work. I'd be interested to see a comparison between generational references, compacting GCs (which are quite similar in implementation details), and generational GCs that use more traditional scanning approaches. Particularly when it comes to memory pressure and what the generational references + regions winds up looking like.


Rust didn't originally have borrow checking. It was bolted on later.


This is technically true, but it was "bolted on" in an extremely non-backwards-compatible way†. Tons of APIs had to change throughout the ecosystem, which was thankfully very small at that point.

This points to what I and others mean when I say that you can't just "bolt on" Rust's borrow checking semantics to a language that doesn't have one. You technically could add the borrow checker to C++ or D or Zig or whatever, but it would break so much code that in practice the result would be a new language.

Also, a relatively minor point, but still worth mentioning: Prior to the introduction of the borrow checker, Rust didn't have any way to share memory across multiple threads, which made the migration easier as there was no need to update multithreaded code to be compatible with the borrow checker. But most languages do have shared memory, which would make adding a borrow checker to them that much harder.

† Technically it was a gradual process, starting with the introduction and slow uptake of unique pointers, then the deprecation of @ and migration to Rc/RefCell, and finally a long period of plugging soundness holes, many of those fixes causing small amounts of breakage as I recall.



This is technically true, but somewhat misleading. The borrow checker was added pre-1.0, and originally it wasn't known whether it could be relied on exclusively for memory management. It was bolted on later, but it changed the language so much that if Rust had been 1.x, after the change we'd be calling the current iteration of Rust "2.0". As such, any production-ready language that attempts the same gambit will have to be ready for a painful split that would make the Python 3 migration look like a walk in the park.


The problem with the Python transition is that the language is so flexible that it becomes challenging to understand if/where/when any code will be impacted by a change. Unit tests end up playing the role of a static type checker (less so now that Python typing is spreading).

For a statically compiled, AOT language, it should be easier to migrate the bulk of the code and identify the hairy bits. Which is not to say it would be easy or fast, but I think Python, Perl or other flexible languages are worst case examples for breaking language change migrations.



That Val site provides the following example, which makes no sense to me:

  public fun main() {
    let weight = 1.0
    print(weight) // 1.0
    let length = 2.0
    print(length) // 1.0
  }
Is the behaviour of print "just print whatever the first variable was, forever"? Is this example wrong?


That is almost certainly just a typo. That example is meant to help explain Val's lifetimes in the most basic possible way.


I wondered the same. I also guess this is a typo.


Great talk on CppFront given recently at cppcon: https://www.youtube.com/watch?v=ELeZAKCN4tY


Article fails to mention that autocxx and crubit are aiming for much improved interop between Rust and C++. Sure they're experimental, but not more so than Val.

Rust devs are also pushing for making more of its core and std library "unsafe" friendly, to reduce the risk of unsound calls to Safe Rust from an unsafe environment with possible aliasing, pinned-in-place objects, self-references, uninitialized ("write-only") memory etc. etc. So that in itself will also deliver improved semantics between Rust and C++.


It's pretty strange. Rust is dismissed because interop with C++ is hard, but then Val is cheered on despite its C++ interop basically being a big "todo" item.


Also just saying a language's C++ interop is better simply because it's easier doesn't really pass the smell test for me. If I need to write some interop for a C++ library, I'm going to prefer one that's more reliable than one that's easy but can fail in weird ways. And unless my project is an interoperability library, I'm not sold on "harder = worse" when I'm most likely going to write the interop once, ever, and be done with it. If it takes me a day, but then I can rely on it to function well and fail in predictable ways, I'm still on board. There's nothing wrong with something being difficult if the result removes a ton of mental load.

I'm not a fan of this article's conclusions at all.


Not to mention efficiency. Sure, it's 'easier' to make a copy of all the data when crossing the FFI boundary, but that's hardly efficient.


Author admits his own bias and preference at the end.


> I do need to confess: in my spare time, I have started working with the Val team

Indeed, I think this should have been made clear in the first section. I thought when the author said they have a bias it was more in the general sense, not that they are working on one of the languages in question.


Yes, misleading.


Can you explain what you mean, about unsafe friendliness in the standard library?


I believe this is referring to the interop abi that is in planning.

It aims to expose a nicer way to glue rust idioms to similar or equivalent idioms in other languages.

https://github.com/rust-lang/rust/pull/105586


Not the poster of the comment but I'd like to hear from the downvoter why this was downvoted.

As a Rust novice, interpreting the comment GP made (from the perspective of, for example, C++ calling Safe Rust core or stdlib) this seems like a valid response.


Would the downvoter like to comment as to why I too got downvoted?


It's 'interesting' that the author criticises Go for having a GC, making it 'inappropriate' as a systems programming language, but then also criticises Go for not having exceptions (which are IMHO at least as 'inappropriate' for that type of language).

If anything, the latest round of systems programming languages which all use error unions instead of exceptions have demonstrated quite clearly that exceptions are actually quite pointless.


System programming doesn't mean kernel and embedded only. A huge portion of C++ programs run in user space because one may still want the performance benefits, direct access to memory via well-defined struct layout, and direct calls to the system libraries to get maximum functionality from the OS. User space programs can handle exceptions.


Sure, but that argument also applies for GC.

I suppose there's a window where you can't tolerate GC pauses but can have exceptions allocating memory unpredictably, but it's a pretty small window.


On the contrary, that window is huge, and you're also assuming exceptions require expensive allocations, which they don't.

Exceptions are for exceptional situations, so the performance impact of them when thrown is rarely of concern - both in C++ and in pretty much every other language with exceptions. There's a reason something like Java still uses error return values for an awful lot of things, after all. But importantly exceptions means that you're not doing return value checking for errors that rarely happen, which can get expensive on the whole. As in, exceptions gives your code a path to both be cleaner and faster on average - they are really very useful when implemented well.

The problem with C++ exceptions has nothing to do with the memory allocation cost when thrown (which as mentioned, there's other strategies you can deploy than the default of just `new`). Rather it comes from the significant impact on binary size and the reliance on RTTI.


The bloat from the exception tables isn't really ideal, but it is the tradeoff of allowing for exceptions to be zero cost in the non-exceptional cases.

I'm not sure RTTI for exceptions is such a big deal.

I'm guessing you are referring instead to the fact that exception catching needs to pull in most of the dynamic casting machinery in many cases, since exception hierarchies at least in theory can have multiple inheritance, virtual inheritance, and other complications, which means that not only is determining if a specific thrown exception is compatible with a given catch clause non-trivial at runtime, but even once determined, the cast may be non-trivial, not to mention the possibility of catching by value needing to invoke a copy constructor.

If a language catches by type, some form of run time type indication in the exception object is fundamentally needed if the language allows throwing an exception whose concrete type is not known at initial throw time. C++ is such a language, as you can throw an exception that was allocated elsewhere, and which you received via a pointer to a non-final class.

In a language where the concrete type is always known at initial exception throw time, then the stack unwinding code could conceptually simply identify any catch blocks that would apply from data in the exception tables because all superclasses are statically known. And it could pregenerate code for any possible upcasting or copy construction needed, so no dynamic casting machinery would be needed.

(But many other static languages with exceptions only allow single inheritance, don't allow catching by interface, and only allow catching by reference, so no fancy copying or upcasting code is needed. Almost all of them do allow throwing without knowing the concrete type, so they need some form of RTTI data nevertheless.)


Which is how you get zero cost unless thrown. You're gonna have to pay for something.


The RTTI requirement is more of an issue since that's forcing RTTI on for all types, not just those that can be thrown as exceptions. An explicit "throws" syntax would eliminate that (since the thrown type is now statically known and doesn't require RTTI) and thus significantly cut down on the cost.

Alternatively, and this isn't very "C++" but would work, it could be required that all thrown exceptions must inherit from a base "Exception" type such that you then only need to require RTTI for that chain of types instead of all types.


'Performance benefits' may also include low and/or predictable latency. GC is a real problem there. Not insurmountable by any means (witness the number of people who use java for latency sensitive applications), but a problem nonetheless.

Edit: Also, depending on how they're used, exceptions can be easier to avoid than GC. IOW, if an exception has been thrown at all, you may well have bigger problems than the latency cost of throwing.


Not a fan of exceptions, but the runtime cost of exceptions vs. the overall runtime cost of GC are not even in the same ballpark.


You can have a GC engine without stopping the world. Completely pauseless.


How?


By analyzing mutators activity. Only the compacting GCs have to stop the world.


Unsure what you mean by "mutators". I am guessing "pointers that can be used to modify data at a memory address".

If so, how can you (as a GC) be sure that while you were adding up all the pointers to an address, and finding none, that another one was not made?

Mystified


Mutators are threads that allocate memory and manipulate pointers; they can work completely independently of the GC. A mutator needs only to tag an object when it copies or moves a pointer to this object. The GC detects this tag and marks the object as alive. Here is a working implementation for C++: https://github.com/pebal/sgcl
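A toy sketch of that tagging idea in Rust (hypothetical names, not SGCL's actual implementation) - the mutator's only duty is to set a flag when copying a pointer; a concurrent collector reads and clears those flags:

  use std::sync::atomic::{AtomicBool, Ordering};

  struct GcBox<T> {
      reachable: AtomicBool, // tag set by mutators, read and cleared by the GC
      value: T,
  }

  struct GcPtr<T> {
      raw: *const GcBox<T>,
  }

  impl<T> Clone for GcPtr<T> {
      fn clone(&self) -> Self {
          // Tag the object on every pointer copy so a concurrently running
          // collector treats it as alive, without ever stopping this thread.
          unsafe { (*self.raw).reachable.store(true, Ordering::Release) };
          GcPtr { raw: self.raw }
      }
  }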


I could listen to you or whoever talk about this all day. Just on the chance you know a good one, do you know any good conference talks or podcasts I can listen to on the same topic?


You can watch Herb Sutter talk about deferred_ptr: https://youtu.be/JfmTagWcqoE


But how can it mark it as dead while the mutator thread runs?

It must stop the thread, surely?


SGCL never stops the threads.


Go does have exceptions, via panic/recover. You could argue that Rust lacks true exceptions because a Rust panic can be configured to be fatal at the whole-program level. (Which is actually great for deep-embedded scenarios where even C++ use is with no-exceptions support.)


Those are not exceptions by most reasonable definitions. Exceptions are a general purpose error handling method. Try, catch and all that. They are almost always objects with multiple types.

Go and Rust's panic is for unrecoverable errors. They both have the ability to catch panics because sometimes you really need to do that (e.g. when interacting with FFI or sometimes multithreaded code).
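For reference, a minimal sketch of that escape hatch in Rust (the docs explicitly discourage using it as a general try/catch):

  use std::panic;

  fn main() {
      // catch_unwind exists for FFI boundaries and thread pools,
      // not as a general-purpose error handling mechanism.
      let result = panic::catch_unwind(|| {
          panic!("unrecoverable bug");
      });
      assert!(result.is_err()); // the panic was contained, not "handled"
  }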

They are not exceptions though. I've seen this myth repeated a few times lately (always about Go and not Rust for some reason) and I wish it would die.

You wouldn't say C "has exceptions" would you?


> You wouldn't say C "has exceptions" would you?

Maybe that's just me, but I wish more people would say that C "has exceptions", if only because longjmp/setjmp[0] interacts with exception systems of other languages, sometimes badly.

That being said, I think an important difference is that in C setjmp/longjmp do not "unwind" objects, due to the lack of destructors.

However, panics at least in Rust are just exceptions by another name. The difference comes solely from culture. Even the fact that unwind can be turned into abort is not significant, considering that pretty much all production implementations of C++ support -fno-exceptions.

[0]: https://learn.microsoft.com/en-us/cpp/cpp/using-setjmp-longj...


> That being said, I think an important difference is that in C setjmp/longjmp do not "unwind" objects, due to the lack of destructors.

I don't think that is a requirement of exceptions. It's just a sensible thing that most implementations do.

> The difference solely comes from culture.

Maybe, but that is a huge difference! You can implement anything in any Turing complete language but you wouldn't say that they all "have" every feature...

To quote https://doc.rust-lang.org/std/panic/fn.catch_unwind.html :

> It is *not* recommended to use this function for a general try/catch mechanism. The Result type is more appropriate to use for functions that can fail on a regular basis. Additionally, this function is not guaranteed to catch all panics, see the “Notes” section below.


That’s being stuck on usage patterns. Any automatic stack unwinding mechanism is the same, whether C++ exceptions or Rust’s panics.


Yeah, it's funny how Go does have exceptions, but no notion of exception safety. Manual mutex locks/unlocks are everywhere, and don't even get me started on defer, which is just terrible.


That's how you get a "simple" language.


> the latest round of systems programming languages which all use error unions instead of exceptions have demonstrated quite clearly that exceptions are actually quite pointless.

What they have actually shown is that error unions are not a panacea and a pain to handle manually. And that hardcore killing your app in the presence of even the tiniest of unhandled errors isn't suitable for any programming, especially systems programming.

That's why Rust ended up introducing try!, ? and catching panics. Go also added panic recovery.


I'd argue that they have demonstrated that:

- unions are not a panacea and are a pain to handle manually, as you wrote;

- but with a little syntactic sugar, they turn out to work really, really well, much better than C++-style exceptions.

There's something to be said for OCaml-style exceptions, which are actually even closer to zero-cost, but I wouldn't call OCaml a systems programming language.

Writing this as one of the persons who advocated for try! around the time of Rust 0.4 :)


try! and '?' are just syntactic conveniences for passing error returns to the caller. They're not even monad-like, because the outer code still has to wrap the happy-path return in Some(...) or Ok(...), which wouldn't be the case with an actual monad.

(I.e. it's quite different from what was done with "async fn" support, where the Future return type is hidden via the 'async' specifier and the control flow is totally changed.)
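To make that concrete, a sketch of roughly what `?` expands to (modulo error-type conversion) - note the happy path still ends in an explicit Ok(...):

  fn parse_both(a: &str, b: &str) -> Result<(i32, i32), std::num::ParseIntError> {
      // Hand-written equivalent of `let x = a.parse::<i32>()?;`
      let x = match a.parse::<i32>() {
          Ok(v) => v,
          Err(e) => return Err(e), // the error return is passed to the caller
      };
      let y = b.parse::<i32>()?;   // same control flow, spelled with `?`
      Ok((x, y))                   // caller still wraps the happy path in Ok(...)
  }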


> try! and '?' is just a syntactic convenience for passing error returns to the caller.

Indeed. Because manually handling those are a pain in the butt. So they made a shortcut. That still needs to be handled by someone up in the hierarchy. Exceptions in all but name.

> It's not even monad-like because

No one cares whether it's monads or not.


The crucial thing about Try (the ? operator) is actually pretty easy to see if you look at the Trait which makes it go:

The result of the branch method on Try is ControlFlow<Self::Residual, Self::Output>

Try isn't stable, but ControlFlow is. They've reified the control flow! Rust has a type which represents the idea of a decision whether to return early. This is in my opinion pure genius, and it happened almost by accident. It seemed natural at first that the decision to return early is manifested directly in Result, but it isn't. That's the insight. Failure and returning early are distinct ideas, and we might just as easily want to return early in consequence of success as failure.

In separating "Success versus Failure" from "Return early versus keep going" Rust gets a lot more value here than is encapsulated in C++ Exceptions.


> Exceptions in all but name.

But with added space and branch-predictor overhead in every call site, and runtime overhead in every call event!


> Exceptions in all but name.

In particular, checked exceptions.

This is a very interesting article discussing the isomorphism between checked exceptions and error return types. They ended up with checked exceptions where a function Foo that may throw must be invoked as "try Foo()" - very similar to Rust's ADT-based macro/syntax.

https://joeduffyblog.com/2016/02/07/the-error-model/


There is nothing about exceptions that's inappropriate for system programming.

If anything it enables the enforcement of strong invariants and leads to better and safer code.


> There is nothing about exceptions that's inappropriate for system programming.

There's lots about exceptions which is inappropriate for system programming, starting from FFI unsafety and the lack of signaling to callers (which makes resilient use more difficult).

> If anything it enables the enforcement of strong invariants

It doesn't do that.

> and leads to better and safer code.

It only does that in comparison to truly deficient (e.g. c-style) error reporting, and that's being generous.


Well it seems you don't understand exceptions. They eliminate erroneous states entirely, since the objects just don't get created if an error occurs.

The alternative that the parent said was making all of your state be a union with some kind of error, and making sure all accesses handle the fact that the variable might be in an erroneous state. That is a huge explosion of possible states in your program, and essentially making every invariant weak everywhere.

Then FFI, I suppose you mean interfacing with C. Problems that arise when interfacing with other programming languages are orthogonal to a language's ability to be used for system programming. Obviously you wouldn't let an exception propagate through some C code, that's forbidden.


> Well it seems you don't understand exceptions. They eliminate erroneous states entirely, since the objects just don't get created if an error occurs.

Error sum types do the exact same thing.

> The alternative that the parent said was making all of your state be a union with some kind of error, and making sure all accesses handle the fact that the variable might be in an erroneous state.

Which is a non-issue as it is lifted to the type system. The type system will not let you forget about that.

> That is a huge explosion of possible states in your program, and essentially making every invariant weak everywhere.

You get an error state added to a given value, which you also get via exceptions, except implicitly and without notification of the additional state.

Type-safe error values also provide simpler error handling and recovery in many cases, because they don't require split-path handling.
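For instance, a hypothetical sketch in Rust where the recovery sits on the same control path as the success case, with no separate catch block:

  fn load_config(path: &str) -> String {
      // One expression covers success, failure, and the fallback:
      std::fs::read_to_string(path).unwrap_or_else(|_| String::from("defaults"))
  }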

> Then FFI, I suppose you mean interfacing with C. Problems that arise when interfacing with other programming languages are orthogonal to a language's ability to be used for system programming.

It very much isn't, part of the system programming workload is to provide reusable components.

> Obviously you wouldn't let an exception propagate through some C code, that's forbidden.

And rarely if ever statically checkable, hence unsafe.


> You get an error state added to a given value, which you also get via exceptions, except implicitly and without notification of the additional state.

This is incorrect. With error return values you're adding a branch to every function call which is quite expensive on the whole. You're adding i-cache pressure & you're adding branch prediction pressure.

Exceptions in nearly every language that supports them (including C++) don't go through return values at all. Rather when thrown the stack is walked to find an exception handler. So exceptions are more expensive to throw than return values, but completely free when not thrown unlike return values.

> It very much isn't, part of the system programming workload is to provide reusable components.

There's absolutely no issue with exceptions & library boundaries in general. Statically checked exceptions also exist (see Java - although there's a big debate about whether that's a good idea or not - but also see https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p07... ). I'm not sure why you're arguing as if they don't.


> So exceptions […] completely free when not thrown unlike return values.

In C++, they are not. Since C++ allows objects to be created on the stack within the current scope, every constructor call for such a new object has to register its destructor with the exception handler's object cleanup jump tables.

Consider an edge case with a loop where 100k local objects are created. Each constructor invocation will incur an overhead of an indexed memory store of the destructor's address in the exception handler's table[0] for each object.

An indexed memory store is typically one instruction on a CISC architecture (it can be more if the data section is located too «far» in the address space and the ISA limits the offset width in the instruction encoding), and typically several instructions on a RISC architecture (building the address in parts, then storing). L1 D-cache and L2 cache, sometimes L3 cache as well (if the exception table grows large), and TLB reloads[1] get involved at all times. All of the aforementioned is just to register a destructor for an exception that might not occur. So, no, it is not free, and (occasionally) the time dimension is not even clearly defined.

This is rather unique and specific to C++ because it is an outlier and allows new objects to be created on the heap AND also on the local stack. Languages that allow new objects to be only created on the heap do not incur such an overhead.

[0] Bonus point: 100k local objects being created inside a try/catch block will also blow the jump table out of proportions and add more cache pressure and cache line reloads.

[1] And even page faults might occur – if the exception table crosses memory pages.


Calling destructors on scope exit is a C++ language feature with or without exceptions. I'm no expert, but I believe these addresses can be determined relative to the stack frame pointer, so they don't need to be registered in advance. Instead, extra code is generated to call all those destructors; this code is only ever called if an exception is thrown. That results in larger executables, but no extra CPU instructions are executed unless an exception is actually thrown.

(In theory a compiler-writer could try to reuse destructor-calling code between the normal exit case and the exception case, but that might force one extra branch in the non-exception case.)


> So exceptions are more expensive to throw than return values, but completely free when not thrown unlike return values

Is that true, though? Placing exception handlers in the stack, and thus examining every stack frame for one, is surely equivalent to testing the sum type to see if it is in an error state?

My disclaimer: I know very little about compilers, so this is an actual question.


You don't put exception handlers on every frame.


As the stack unrolls, how does it know where to find the handlers? It must (?) unroll the stack


That's only when an exception is thrown, though. If an exception isn't thrown, "unroll the stack" is just a normal `ret` instruction. There's no exception handling code at all when a function returns normally without an exception, which is the point. By contrast when an error sum type returns without an error, you're still doing a branch at the call site to verify that.

Here's a simple example showing exceptions vs. sum types: https://godbolt.org/z/9Er6eKxs9

In the non-throwing exception path, there's literally no error or exception handling code executed at all. Whereas in the sum-type error-returning version, you have a branch at every call site that's always executed regardless of if there's an error or not.

Now, the exception handler generates ".cold" clones of the function, so the total assembly for the exception-handling one is larger. However, that assembly isn't ever executed if an exception isn't thrown, which is the broader point. So it's not taking up CPU cache space & it's not taking up branch predictor slots.


The point is there is no advantage to throwing an exception VS. using sum type return values (e.g. Result types in Rust)


That's a bad point? Exceptions are not normal control flow. They are rare or, as one might say, exceptional. The performance of them when thrown isn't of key concern, it's the performance when they are not thrown that matters since that's the >90% case. And in that case, code using exceptions is faster than code using sum type return values, especially if those errors propagate deeply across the call stack which they very often do.


You mean destructors? An exception handler would be a catch block.

Anyway, the typical implementation involves two phases, one which uses a table to identify the matching catch clause, then another one going through landing pads for each frame of the stack. Just consult the Itanium ABI spec for technical details.


The problem is not "forgetting about it", it's that it increases the possible values of your working set.

If you have 3 variables, each of which can be in 10 states, that's 10^3 states your program can be in.

If instead you have 11 states because all your variables are actually unions with an error, that's 11^3 states (assuming all error states are equivalent to a single state).

Now in practice it's even worse since what you care about isn't the possible states of your values, but rather how many different paths you have in your control flow to handle them.

Then you're comparing 1 (none of my values are in an erroneous state) with 2^3=8 (any of my values can be in an erroneous state or not).

What exceptions do is enforce that your working set does not have to encode any erroneous states, preventing the combinatorial explosion of states, which of course is a net win; there isn't really any valid argument that can be made against it.

Where people are debating is that sometimes you do want errors to be part of your working set, in which case you shouldn't use exceptions. But choice is difficult for some, especially those seeking absolute doctrines.

> It very much isn't, part of the system programming workload is to provide reusable components.

That's already somewhat dubious, since a lot of system programming tasks are really purpose-built for a use case or for specific hardware, and regardless, there is nothing about that which has anything to do with interfacing with C.

I do a lot of system programming and I write it all in C++, which has a lot of advantages over C beyond exceptions.


> If instead you have 11 states because all your variables are actually unions with an error, that's 11^3 states (assuming all error states are equivalent to a single state).

This isn't what actually happens, though. What actually happens is that people declare local and member variables that are the_type_i_actually_want instead of Result<the_type_i_actually_want, err> and bubble up their errors like they would exceptions. So they get the benefits that you've claimed, but they don't need to pay the runtime cost of not-thrown exceptions that C++ users have to pay, they don't have to use external tools to tell them which functions they're using can throw exceptions, and they don't have to enjoy the wonders of Java, where checked exceptions in function signatures regularly prevent the use of streams.


You're conflating recoverable errors (Result in Rust, status codes or std::expected in C++) with the non-recoverable errors (panic in Rust, exceptions in C++).

If we were to compare Rust panics vs C++ exceptions, then handling of Rust panics is much less flexible. From what I understand, it's essentially a std::abort and it can hardly be used otherwise, which covers only a subset of how C++ exceptions are commonly used.

If we were to compare Rust Result vs C++ std::expected, they boil down to pretty much the same with the difference of Rust requiring the call-site to unconditionally check for the return value. That may or may not be preferable in every situation.

> they don't need to pay the runtime cost of not-thrown exceptions that C++ users have to pay

Had this been true - and in 99% of cases it isn't, unless you can support your claim - do you mind sharing how Rust implemented its zero-cost panics?


They're not conflating.

mgaunard says:

> The alternative that the parent said was making all of your state be a union with some kind of error, and making sure all accesses handle the fact that the variable might be in an erroneous state.

This is exactly what `Result` is in Rust. While I haven't used Rust, it seems that panics are generally discouraged and only used as a last resort whereas exceptions are more commonly used in C++ and Java.


Yes, they are. They are basing their argument on comparing Rust's Result against C++ exceptions in the context of general error handling, whereas I pointed out that there are actually two classes of errors, both of which are addressed by their own appropriate mechanisms in both Rust and C++.

What the parent comment tried to (wrongly) imply, and your comment as well, is that exceptions in C++ are (commonly) used as a control flow mechanism. And they are not.


There is no cost to exceptions that are not thrown.

On the contrary, the approach you describe introduces a lot of overhead, since it affects all code paths, the function call ABI, jumps after every function call, etc.

Also, in C++ you have operators that are integrated into the type system and are resolved at compile time to know whether a given expression can throw an exception or not. Do not confuse C++ with Java.


> There is no cost to exceptions that are not thrown.

This is not true for a variety of reasons, but the main ones are maybe missed optimizations and otherwise-unnecessary spills of objects into memory so that their destructors may be called.

> Also in C++ you have operators that are integrated in the type system and are resolved at compile-time to know whether an given expression can throw an exception or not.

Maybe if you only have a single TU or LTO? In general any function from another TU can throw an exception so you don't have this.


> This is not true for a variety of reasons, but the main ones are maybe missed optimizations and otherwise-unnecessary spills of objects into memory so that their destructors may be called.

The missed optimization opportunity you describe only affects the Windows ABI, designed in 1989.

> Maybe if you only have a single TU or LTO?

Whether a function can throw or not is part of its signature.


> There is no cost to exceptions that are not thrown.

Oh yes, there is. The C++ compiler has to emit unwind tables, register destructors for RAII resources, and generate RTTI information (where applicable).

In this trivial example, consider and compare two versions: the first does not have an exception handler; the second wraps a single constructor call in a dummy try/catch block:

– No exception handling: https://godbolt.org/z/MK1bof45d

vs

– With a dummy try/catch block: https://godbolt.org/z/hT4Efez1h

For the latter one, the object file size instantly goes up by 1kB by virtue of adding a no-op exception handler. Exception handling implementation is not standardised and varies across different compilers AND also across different runtimes. C++ exceptions are oftentimes a big no-no in the embedded world due to the space and time costs the language imposes. And long gone are the days when a «try» was a «setjmp» plus a few bells and whistles and a «throw ecx;» was a «longjmp».


Yeah, that's why exceptions were created back then. They got rid of a lot of extraneous branches in exchange for a small, nearly constant cost on your function calls.

But with decades gone, things changed. That constant cost is relatively not so small anymore, and those branches are much cheaper now.


Your reasoning is off. You don't have 11^3 states if you always unwrap the return values at the call sites (which implies returning if unwrapping fails). It's literally the same as exceptions, just that the errors get encoded by (re-)using the type system. You'll have the exact same types for your local variables -- the only difference being that you would put a '?' (or similar) after function calls, to unwrap the return values.

The advantage of this ADT approach is that you can store error unions more permanently when it makes sense. It is not additional syntax, unlike exceptions. In that sense ADTs are the simpler approach of the two. If there is any "explosion of complexity", then it is exceptions where you get that -- because you have to express your code using multiple mechanisms (types vs exceptions), and possibly have to switch between the two when refactoring.

I say that as someone who doesn't think highly of either approach. In my view, plain error values are fine; there isn't any clever language solution needed. If you find yourself checking return values a lot (as opposed to storing error values in handles and checking them at strategic locations), that can hint at an architectural problem.
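To sketch what I mean by storing errors in handles (hypothetical Rust, loosely in the style of drawing APIs that latch the first failure):

  struct Canvas {
      first_error: Option<String>, // errors accumulate in the handle
  }

  impl Canvas {
      fn draw_line(&mut self) {
          // on failure, record it instead of returning it, e.g.:
          // self.first_error.get_or_insert_with(|| "line failed".into());
      }

      fn flush(&mut self) -> Result<(), String> {
          // One check at a strategic location, not after every call.
          match self.first_error.take() {
              Some(e) => Err(e),
              None => Ok(()),
          }
      }
  }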


By unwrapping and returning, you're creating another path down the control flow of your program, which also propagates to your caller, since you have to return an error.

Exceptions don't do that, they stop the flow entirely, then match it to a point arbitrarily higher on the stack, and resume after that whole sub-tree has been destructed.

They're also much more efficient than branching and maybe returning on the result of every single function call.


> Exceptions don't do that

Exceptions actually do that, except hidden and unsignaled.

> They're also much more efficient than branching and maybe returning on the result of every single function call.

Not when actually taken.


> Exceptions actually do that, except hidden and unsignaled.

Which IMO is good in exactly one situation: when raising the exception means that the program contains a bug.

Using panics (ah sorry... exceptions) in this case is justified, as it should be really exceptional (if there is a bug, we have more pressing problems than performance anyway), and in the absence of a bug, using a Result type would mean having a "BugError" variant that is actually dead code everywhere the program is not buggy.

So in my opinion a correct approach is to unwrap whenever you have an invariant that guarantees that there should be a value, with a panic handler set at the boundary of the logical task to fail the entire logical task in case there is a bug. A logical task can be an asynchronous light task, a thread, or the whole process depending on the situation.

I much prefer it not being the whole process when the process is e.g. a web server or a word processor (and the failure occurred somewhere in an ancillary function)


I don't see a reason why the compiler couldn't implement error-sum return values the same way that exceptions are typically implemented (the way you describe).

(I don't see why it should, either. The blanket "efficiency" argument is unconvincing to me).

Ok, I see one reason: The programmer might want control over which implementation is used. That would require an additional mini-feature in the language syntax/function types. But this still wouldn't be an argument for a whole different syntax and forced separate code paths as required for traditional exceptions. And it's theoretical anyway -- I don't think it's important to give the user this "control".


This doesn't track at all for me. Rust provides strong guarantees around accessing discriminated unions. The net effect of which is that the code you write has the "railway style" error handling that you get with exceptions in the trivial case (propagate the error). It even has a convenient syntactic shorthand for this `?`.

In non-trivial cases they are equivalent too. For example, collections need to maintain at a minimum a valid state in the presence of types with exception-throwing (fallible) constructors. This is a mess with or without exceptions in basically the same way. It's such a mess that the C++ standard allows for unspecified behavior of `std::vector::push_back` if the contained type has a throwing move constructor. Throwing move constructors are of course ridiculous but nonetheless allowed.

And that I would say is the biggest flaw with exceptions: they presume the fallibility of everything by default. This is not only brain damaging, it actively creates situations where there are no good options.


> If you have 3 variables, each of which can be in 10 states, that's 10^3 states your program can be in.

> If instead you have 11 states because all your variables are actually unions with an error, that's 11^3 states (assuming all error states are equivalent to a single state).

> Now in practice it's even worse since what you care about isn't the possible states of your values, but rather how many different paths you have in your control flow to handle them.

You're really demonstrating that you have no clue about the subject and refuse to think about it.

If the current function does not specifically deal with erroneous results (i.e. it would be a passthrough for exceptions), then it unifies the error states into one, either by pruning their branches through early returns, or by unifying the triplet of results into a result of a triplet.

Hence you don't have 11^3 states but 10^3 + 1.

> What exceptions do is enforce that your working set does not have to encode any erroneous states, preventing the combinatorial explosion of states, which of course is a net win; there isn't really any valid argument that can be made against it.

The problem is that none of that is actually true, you're literally inventing combinatorial explosions which effectively don't exist.

Unless they would have to in all cases, at which point exceptions would lead to a significantly worse combinatorial explosion, because exceptions would not allow representing the product of 11 states as just that; instead you would need 20^3 states, as every possible value would have to be paired with one of two outcomes, success and failure.

> That's already somewhat dubious

It really is not.

> there is nothing about that which has anything to do with interfacing with C.

The C (or system) ABI is the lingua franca of inter-language communication, unless you decide to pay a network cost instead.

> I do a lot of system programming and I write it all in C++, which has a lot of advantages over C beyond exceptions.

And plenty of drawbacks as well.

But if all you know is C and C++ and you see the entire world through that lens, I can see why you're missing most of the field, you're essentially blind.


Exceptions prevent the control flow from continuing, which prevents the creation of those states further down.

I find your tone too inadequate to engage further with you though.


> What exceptions do is enforce that your working set does not have to encode any erroneous states

You do that by having types that encode a guaranteed non-erroneous state. It's not like exceptions are doing anything all that different, they're just trying to establish that guarantee in a language where variant record types and pattern matching are not first-class facilities.

This is something where C and C++ actually regressed from Pascal, which did have support for variant records.


A variant does not make that guarantee, it just segregates it.


Same for exceptions really. Exceptions don't give any guarantee of non-erroneous state. The guarantees that you're talking about actually come from how construction and deconstruction work in C++ (note how it plays with early returns just fine, no exceptions needed). And these construction semantics can be implemented with variant types as well, it's completely unrelated.


They prevent the control flow from continuing in that direction, which prevents those variables from ever existing.

Early return is nothing like exceptions. Early returns need to return something, which passes the problem to someone else. It's also a choice to do it at all.


You're completely missing my point. The point is that both prevent the control flow from continuing in that direction. Both prevent the variables declared later to ever "exist".


> What exceptions do is enforce that your working set does not have to encode any erroneous states, preventing the combinatorial explosion of states, which of course is a net win; there isn't really any valid argument that can be made against it.

A combinatorial explosion of states is not a bad thing. Integers in C++, for example, have 4294967296 possible states. Programming does not descend into complete chaos just because one of the fundamental types has more possible states than the human brain is capable of handling.

You're describing using exceptions as a catch-all fail-safe. It's isomorphic to the "else" statement in your standard if-else structure, which is one of the techniques people use to handle the 4294967296 possible states of an int. See the example code below for this amazing technique I use to deal with 4294967296 possible branching possibilities:

   if(x == 0){
      // do something
   } else {
      //handle all 4294967295 other states. 
   }
> Where people are debating is that sometimes you do want errors to be part of your working set, in which case you shouldn't use exceptions. But choice is difficult for some, especially those seeking absolute doctrines.

In every other engineering field you do want this as part of your design. You want to know about every possible state your system can be in and handle those states explicitly. Unknown states that are not explicitly encoded into an engineering design are typically a Bad thing.

That is not to say you should design your system and not acknowledge the possibility of an unknown state. You need fail-safes like exceptions to handle these unknown states. But make no mistake, it's not good to have fail-safes regularly executing to catch a bunch of states you failed to encode into your system.

A good example of this is the corrected design of MCAS on the Boeing 737 MAX. MCAS should not use a fail-safe to handle the failure modes we are now well aware of. MCAS should explicitly encode our knowledge about the newly known error modes. I certainly don't want to sit in a plane where this hasn't been done.

I will also say that much of programming doesn't need the level of safety other engineering products need. Shipping products faster at the cost of quality is something unique to software, as quality can be improved AFTER shipping. So that is not to say your way of using exceptions to catch unknown states (or states not explicitly encoded into the system) is completely wrong; but it is certainly not best practice or ideal.


You don't need exceptions for constructors. You can just use static factory functions with an error return as Rust does, and dispense with constructors altogether.


Can you enforce at compile time that only such a static factory can ever be used for creating the object in Rust? This is the whole point of constructors: they cannot not be used. Otherwise you're just one commit away from creating an object that does not respect the invariants - will you even remember to call this specific factory function in 6 months?


Sure, just make the fields of the struct private and you can provide a public constructor function:

https://doc.rust-lang.org/rust-by-example/mod/struct_visibil...
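A minimal sketch of that pattern (hypothetical type): the field is private, so outside the module the fallible factory is the only way to obtain the value, and the compiler enforces it:

  mod units {
      pub struct Celsius {
          degrees: f64, // private: not constructible outside this module
      }

      impl Celsius {
          // The only way to obtain a Celsius, so the invariant always holds.
          pub fn new(degrees: f64) -> Result<Celsius, String> {
              if degrees < -273.15 {
                  return Err(String::from("below absolute zero"));
              }
              Ok(Celsius { degrees })
          }
      }
  }

  // let c = units::Celsius { degrees: -500.0 }; // compile error: private field
  // let c = units::Celsius::new(21.0)?;         // the checked path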


Yeah, weird question, because you can use the exact same style in C++.


To me, constructors are mostly a convenience thing for the simple cases where I declare a quick container on the stack or similar, and don't want to waste keypresses to have it constructed. And to me it's a question of, what does the language want to be -- maybe this kind of code is better served by languages like C#.

There are various practices that handle the problem of having to call a specific function to get an object in a specific state, without requiring language support. You can make the function that should be used stand out in an obvious way. You can hide the definition of a structure, which prevents it from being constructed except through the functions you provide.

If it is the whole point of constructors to guarantee that the object is in the right state, it could be the best argument why Rust does not have constructors. Programming is chock-full of "this must be called only by that or in this or that context..." and practically speaking, only a part of them can be handled by language objects and construct/deconstruct semantics.

Plus, C++ gives enough escape hatches to get not-constructed objects or to un-construct objects without them going out of scope. And the guarantees that C++ does provide (but not really) require a ton of baggage: ridiculous initializer lists, 17 different kinds of constructors (to the point where it's sometimes almost impossible to tell which will be called), and an out-of-band mechanism (exceptions) to signal construction failure... that's not worth it in my book.


You get this by default for any object with non-public fields.


Enjoy your combinatorial explosion of states.

It's not equivalent at all.


Could you elaborate? Rust manages pretty well without constructors and I'm nearly certain that it doesn't have any kind of combinatorial explosion of states. Same for ML-family languages.


See other parts of the thread.


That’s really not an issue, as pointed out in other parts of the thread.


> They eliminate erroneous states entirely, since the objects just don't get created if an error occurs.

EPIPE

Erroneous states can arise outside of your control at any time.


Exceptions have their uses, including in system programming. There’s in fact nothing about system programming which makes a particular error handling method better or worse. These are the kind of minor points some programmers like to fixate on and then sell as the one true way of doing X, while providing no proof and asking everyone to trust them, because it worked great in a project once for the comment author.

Not letting exceptions escape at API boundaries has been a technique for a few decades. It’s not rocket science.

Writing exception-safe code is likewise an ancient technique by now. Meyers’ books which explained such things were published in the 90s…


Signaling to the caller has already been shown to be a complete mistake with Java's checked exceptions; I don't understand why people persist with this. It makes for ridiculous code.


> Signaling to caller has already been shown to be a complete mistake with java checked exceptions

It's not signalling which has been shown to be a complete mistake, but checked exceptions, more specifically as implemented by Java.

Signalling works just fine and is actually rather enjoyable when done well.


Can you point to an example? What has "signaled exceptions"?


> Can you point to an example?

Rust, Swift, Erlang, Haskell, ...

> What has "signaled exceptions"?

Java.

It's not clear what you're asking.

My comment noted that signalling the possibility of errors to caller (in general) is valuable.

jcelerier objected with java's checked exceptions as a counter-example, as they signal to the caller, but they're shit.

However that's not a counter example, that's just java's checked exceptions being a shitty implementation of the idea.


Are you saying that "not signaling" has not been shown to be a total maintenance nightmare as well?

Because the only way it is not ever a maintenance nightmare is when you really don't care that function execution could be aborted at any point without any visual clue. And the only way I can see you wouldn't care is when you go 100% in on everything being context-managed / RAII or similar. And again, sorry, but I can't be arsed to prematurely rip everything into little pieces like that. It makes for terrible code IMO.


But you can be "arsed" to write an error handler after almost every single function call?


I explained here that I consider having to check many return values to be a smell: https://news.ycombinator.com/item?id=34217251

And from my own experience, no I don't have to check a lot of return values.


> enables the enforcement of strong invariants

My experience has been the opposite. Ensuring exception safety in a type that has nontrivial move/copy operators (that is, a type that for whatever reason can’t follow the “rule of zero”) is often a research-level problem. Not having to worry about that in Rust is such a breath of fresh air.


No, you do have to worry about that too in Unsafe Rust if you want to achieve memory safety. (If you're writing only Safe Rust then probably not, but then that also applies to C++ if you're adhering to the rule of zero and extensively use the STL containers for all your memory allocation needs.)

Exceptions do exist in Rust, and you do need to catch them explicitly at the FFI boundary. [1][2] And the programmer needs to take care that their abstractions are safe under stack unwinding when writing any kind of unsafe code in Rust. (For an example: [3])

[1] https://doc.rust-lang.org/nomicon/unwinding.html

[2] https://doc.rust-lang.org/std/panic/fn.catch_unwind.html

[3] https://doc.rust-lang.org/nomicon/exception-safety.html


Enforcing basic exception safety is trivial, you just have to follow very simple rules.

Enforcing strong exception safety might require some thought, but it's definitely not "a research-level problem".

In any case either of these is miles easier than satisfying the Rust borrow checker, unless you use the cop-out of (A)rc.

Regardless, how the invariants of your objects are maintained in case of operation failure is something you should be thinking of regardless of the language.


> Enforcing basic exception safety is trivial, you just have to follow very simple rules.

That's not my experience as a C++ developer in a complex, cross-platform application, which needs to:

1. interact with C;

2. operate with an event loop;

3. operate/interact with a GC;

4. interact with non-trivial system libraries (e.g. Direct3D, Vulkan, ...)

> In any case either of these is miles easier than satisfying the Rust borrow checker, unless you use the cop-out of (A)rc.

The borrow checker is indeed complicated. I'm not sure how you define "satisfying", though. As for (A)rc, it can definitely be interpreted as a "cop-out" or as delaying optimization until you actually have good reasons to believe that you need it.


I'm sorry to hear that you haven't been successful in using C++ features to their full potential in environments tightly coupled with C libraries. Integration with C or C-like code usually requires some effort if you want to be able to use exceptions that could propagate through C.

I do not offer consulting but can refer you to people who do.


Thanks. Do you think they'll have time to rewrite Firefox? :) (or Chrome, which encounters the same issues)


Well, from what you were saying, it's mostly a problem of making your asynchronous framework work well with exceptions. It is true that you need to do special things for asynchronous programming to work well in C++, be it for exceptions or even the scope-bound lifetime management of C++ in general, which all have a huge impact on the design of your system.

In particular most multi-threaded C++ code is incorrect, not because it is impossible to do it correctly, but because the standard tooling is too low-level, each third-party framework targets a different niche, and people who roll their own tend to hack it together.

I understand Seastar is supposed to do it somewhat correctly, so you could suggest to Mozilla that they switch to that.


Maybe switch to the SerenityOS browser.


I'll be sure to suggest that to Mozilla and Google :)


> If anything it enables the enforcement of strong invariants and leads to better and safer code.

How?? If a container has `get(K)->V` and `remove(K)->V`, then how does it preserve this invariant? This is an impossible contract to satisfy once you try to push once and pop twice. The container is promising you something it can't satisfy; I would rather have a container that's honest with `get(key)->Maybe(value)`.
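Concretely, that's the honest signature Rust's HashMap already has - removal returns an Option, so the second pop is a visible None rather than a broken promise:

  use std::collections::HashMap;

  fn main() {
      let mut m: HashMap<&str, i32> = HashMap::new();
      m.insert("key", 1);
      assert_eq!(m.remove("key"), Some(1)); // first pop: the value
      assert_eq!(m.remove("key"), None);    // second pop: an honest None
  }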


You have it backwards, the strong invariant is that you have a container and not a maybe container.


This may be true for checked exceptions, but certainly not for unchecked exceptions.


All exceptions must be caught to satisfy lifetime invariants in multi-threaded contexts.


Of course. With unchecked exceptions, it's impossible to statically check that this is the case, so for system programming they are not an appropriate error handling mechanism.


> With unchecked exceptions, it's impossible to statically check that this is the case

Aren't checked exceptions just a hint to the programmer?

Also I don't see why static analyzers shouldn't be able to map out which functions throw which exceptions without hints in the language itself; the information is in the code.

Lastly, if the goal is as simple as catching all exceptions, can't you just enforce a catch-all at the top level?


None of system programming can be statically checked, since it's mostly about I/O with weak memory models.


Caveat: we may be using different definitions of "system programming". The one I'm using is code that is fairly close to the system, i.e. will call into libc or into the kernel/libSystem/etc. as well as calling more-or-less directly into a bunch of system-specific .so/.dylib/.dll.

In my experience as a system developer, you need to invest some time into understanding the invariants expected/promised by the libraries you're calling, but many of them map nicely to static types. Of course, if you're implementing e.g. an IPC or RPC layer, you need to deserialize (and validate along the way) your inputs, but there are very few systems that do not need to do that regardless.


Code that uses libc is just normal code.


I don't disagree :)


Exceptions have zero cost when not thrown, apart from being included in the binary.


The mere possibility of a function throwing an exception is a side effect that an optimiser can't ignore, and this inhibits many types of optimisations involving code motion. This is, for example, the main cost of bounds checking in Rust, not the branches.
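A hedged sketch of the effect in C++ terms: a potentially-throwing bounds check forces the compiler to keep observable stores ordered relative to each possible throw, and that ordering constraint, not the branch, is what blocks vectorization and hoisting (compilers can sometimes prove the check dead, but not in general):

    #include <cstddef>
    #include <vector>

    // at() may throw on each iteration, so every store to dst must stay
    // ordered relative to each potential throw; the loop resists vectorization.
    void copy_checked(int* dst, const std::vector<int>& src) {
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = src.at(i); // may throw std::out_of_range
    }

    // No potential throw: the optimizer is free to reorder and vectorize.
    void copy_unchecked(int* dst, const std::vector<int>& src) {
        for (std::size_t i = 0; i < src.size(); ++i)
            dst[i] = src[i]; // no side effect to order around
    }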


I always hear this, but these days entire Linux distros build with -fasynchronous-unwind-tables which has much more overhead than C++ exceptions, since now you have to provide an unwind path for every single instruction boundary. Even Chrome does it (for "accurate stack traces") when otherwise they're on the no exceptions camp. Using this option also completely subsumes the otherwise negligible cost of C++ exceptions, which only affects non-inlined function calls (which already limit code motion anyway).


Unless you care about performance. Exceptions have a performance benefit in that there's no cost if they are not thrown. With returning error values, every caller up the chain has to test the error condition.

We've used exceptions for exactly this reason to improve performance, albeit this was a long time ago.


It was obviously click-bait, as half-way through it becomes clear the article was written to promote Val, a language practically no one has heard of compared to Carbon and CPP2. The click-bait worked. Val looks like a real contender, but now I'm forced to ignore it because of the dishonest means by which I was introduced to it :)


Oh, another thing. There is nothing about Val that places it as a successor of C++ in any way. A C++ successor language starts by supporting interoperability with C++ at the source level. Val didn't even attempt to do this.

This just looks like someone who likes Val wants it to be considered as a successor to the language, and not be overshadowed by CPP2 and Carbon.

My absolute favorite part of the article, which I'm going to steal in the future, is the disclaimer that translates any future failure of Val to a success: "if Val dies as a programming language but all its ideas are incorporated in C++, then I will be delighted."

pahleeze


I don't think your last paragraph is a fair criticism. That's more or less how PL research is conducted. A feature of a PL is successful if it escapes the lab, not if the research language it was implemented in gets popular in industry. The latter is quite rare while the former just takes a long time, which is why industry lags academia so much.


Why would a proposed successor language require compatibility with C++ source? Logically, a successor is one you write future projects in, not one in which all existing code must compile now; otherwise you'd be stuck with the same tech in 2100 as in 1980.


You explicitly don't need direct source-level interoperability with C++. The language just has to become the primary choice in the domains where C++ used to be. Rust appears to fit this definition quite well.


I really enjoyed reading about Austral, which was discussed on HN this week, though it is still really early: https://news.ycombinator.com/item?id=34168452


Agreed, Austral looks really cool, and the author's explanation of "what a linear type is and why we use them" was the clearest I've ever heard.

I was surprised it uses an Ada-like syntax -- I thought the standard wisdom for new languages was to either imitate C or Python ;-)


Very little discussion of Swift and how the current work on C++ interop is in many ways a peek into what Carbon and others may be in several years.


I miss pony, rune, nim, crystal and many more. rune has an explicit 'replace C++ in prod' goal.


Hm I never heard of Rune, and I have heard of all the others (although this article gave me more info about Val):

https://github.com/google/rune (side project, not Google project)

This part is interesting, as I've found many benefits from having a layer of indirection between the app-level data definitions, and the C/C++ struct level. I recall that Jai used to advertise the SoA -> AoS transforms but that was many years ago.

> It provides many of its features by deeply integrating the "DataDraw" tool into the primitives and constructs of the language. DataDraw is a code-generation tool that generates highly-optimized C code which outperforms e.g., the C++ STL given a declarative description of data-structures and relationships between them. For more information, see the DataDraw 3.0 Manual.

I've never heard of DataDraw either .. I wonder who uses it?


I can field questions folks might have on DataDraw. I'm not Bill, but I wrote the docs PR that recently overhauled the Rune README to highlight a lot of this interesting info about its use of the DD tool.

Another neat thing about DD -- the Rune compiler/grammar itself are written as DataDraw types. All user-defined classes/types are compiled under the hood into DD types.

One of the builtin things you can do is generate PostScript visualizations of them. Check this out:

https://github.com/google/rune/pull/33#issuecomment-13558283...


Indeed. And Nim has very good FFI with C++, including templates even.


Tip for blog authors: if you open your posts with apparently-sincere references to TIOBE, readers will immediately close the tab and conclude that your content is not worth reading. TIOBE is a joke, and has been for decades.


Also, stating your affiliations and biases upfront rather than discreetly in the middle of the last section avoids readers assuming you're just shilling, or at least limits the whiplash.


I am very bullish on Carbon. Using C++ semantics will make switching and integration much easier.


Carbon got one obvious thing right: Culture. The most important thing Rust has that C++ doesn't is the right culture and so it was correct to focus there very early.

Unfortunately on the technical side it still feels as though the intent is to add safety, and IMNSHO that's not a good idea, you want to design safety in from principle, when it's layered on later the resulting joints and discontinuities always end up causing problems.

On the other hand I am even less confident Cpp2/ CppFront has the right idea. There's no sign that Herb Sutter groks the culture problem and he's not as focused on safety, preferring to see this in part as a vehicle to give all Herb's rejected C++ proposals a second chance. As WG21 convenor and as an otherwise important person in the C++ community Herb has opportunities for Cpp2 that don't exist for Carbon, but if this language isn't enormously safer than C++ I don't see the point.

Val certainly has the "from principle" thing. Like Rust there's a clear and articulable rationale for why this way to do things is safe. I don't know much about Val's culture, and of course like these other 2022 "successors" it's very young.


Its abstractions and memory safety aren't zero cost though. I do wish FFI between Rust and C++ were easier, having just fought that battle again recently, but I'm not willing to give up performance over it.

I think it is telling that Carbon came out of Google, and Rust came out of Mozilla, but Rust is what Google is using to make Android safer, and to create things like KataOS.


KataOS is a greenfield project. Carbon isn't being created for greenfield applications; it's being created to deal with maintenance of large existing C++ projects. The Carbon repo publicly mentions that you are better off using Rust otherwise.

Using Rust in Android is indeed interesting, but it's largely not hindered by legacy interop with C++ there either. Rust has struggled to overcome the challenges of C++ interop in Chrome. I fear other C++ codebases face this same struggle.


That makes a lot of sense. It is hard enough on small codebases. I'm very familiar with some c++98 behemoths and I can't think of any good ways to go about it piecemeal. The interop truly is painful.


But this is exactly why we have C++ in the first place: you could port your C code to Cfront without any effort.


Same, I was really hopeful that Rust was going to fill the void of a modern systems language (been following it since around 0.6), but judging by the direction it's been headed since 1.0, the state of libs and the ecosystem, and the general community sentiment, I've kinda lost hope at this point. To quote Carbon's readme, "barriers range from changes in the idiomatic design of software" (emphasis mine).

What took the cake for me, though, was a post a couple weeks ago where people were griping about Go's error handling (if err != nil) when Rust is, at best, no better than Go (e.g. if you want to add any context to your error), or just objectively worse (the ? operator moves all your error handling logic to a separate, completely different part of your code base).


> people were griping about Go's error handling (if err != nil) when Rust is, at best, no better than Go

Have you used Rust in a non-trivial context recently? I have a few minor gripes about Rust's error handling, but Go's error handling isn't even in the same ballpark. Your complaint about the ? operator is moot, because you can easily use "match", "if let", "unwrap_or_else()", etc... to handle the errors in place. ? just gives you the convenience of mapping/propagating an error, which is desirable in some situations.


> "match", "if let", "unwrap_or_else()", etc...

all different ways to write `if err != nil`; none of them is better than the others

> mapping/propagating an error

which is literally what i said "moves all your error handling logic to a separate file"

have you written any long-term maintainable code? comments like yours are why i've lost hope in rust and the community; you don't even understand the challenges large code bases face and yet continue with baseless zealotry.


> all different ways to write `if err != nil`

Not hardly. This comment makes me think you don't have any significant experience with Rust.

> you don't even understand the challenges large code bases face

I've worked in more large code bases than I care to think about, including ones written in Go. In fact, one of the primary reasons I like Rust is because it gives me so much confidence when working in large codebases. The compiler is ruthless! Data races won't compile. Forgot to handle a return value? Won't compile. Undefined behavior? Not likely. Async code that looks like sync code? You bet! I'm not hating on Go here; I think it's a fine language. And Rust isn't without its problems.

>continue with baseless zealotry

It's certainly not baseless, but I can see how it would come across as zealotry. I'm simply excited about Rust because it has made my life as a programmer much better.


There is a much safer and simpler language hiding inside of C++ waiting to be discovered. If anyone can help reform the language, it's Herb Sutter with his cpp2 syntax. It's very much an experimental hero project, but it's promising: https://www.youtube.com/watch?v=ELeZAKCN4tY


For me 2022 was the year of learning C++ (background in C# and PHP) and I must say that after being intimidated by the language (extensive use of pointers, stack/heap compile/run-time allocation and deallocation, templates) for years, when I really took a proper look and got some practice, it is an incredible language. The command and control you get by working with memory and the system at a lower level is amazing. I found it so much easier to learn (once I got the grasp of the C++ way of doing things) than something like Rust. All the higher-level advanced concepts work and flow beautifully as you'd expect based on the lower level of the language... and the amount of "magic" is kept to a minimum... you can just open std and look at what is being done and it all makes sense. I understand why people want to move from C++ but as a newbie in this language, I find it amazing.


It's all sunny until valgrind / asan fail to reveal the source of an elusive "Uninitialised value was created by a stack allocation"


I work on a C++ project and actually sympathise with this sentiment. It's a massive strategic risk to our org, we pay a consistent tax in reduced productivity and stupid bugs, and I will advocate migrating away as soon as we have a suitable destination (Carbon looks promising).

But, I do also really enjoy writing C++! I'm optimistic that writing Carbon/Val/whatever ends up gaining traction in the industry will be just as good though.


Are you using legacy C++ or modern C++?


Quite modern! A few of the latest features we don't use yet but overall it's pretty up-to-date. Also no exceptions.


> I find it amazing.

This shall pass :)

The main beef with C++ is that it's just a mass of every single feature possible held together with some goo. It works, but there's no grace and elegance.


It is really only that everyone writes C++ a little bit differently. From the outside this looks like C++ is a mess, but in reality you can pick and choose what you like, and that's exactly what people do. You can be as safe or as unsafe as you need to be. That, of course, also opens the door for beginners to accidentally write unsafe code for a long time before they know all the footguns.

I think the real dent in C++ comes from wholesale improvements to languages by adding package management, one-liner built-in toolchains, built-in testing and build system. I could write paragraphs about why this is a good thing but we all know why.

CMake is making an effort to make it easy to fetch content for your build system, including Git repos, so there are paths to take today, but you need to learn a lot of separate things just to get started with C++. What are the chances that a beginner's C++ tutorial gives you all the best practices in a way that a newer language does by default?

Learning C++ will probably look like the experience of using a web server with poor defaults, a strange configuration language, and thousands of different tutorials detailing a 20-year period of changes.

If I were to host content today I would not use Apache or nginx - I would probably start with Caddy and go from there.


> improvements to languages by adding package management, one-liner built-in toolchains, built-in testing and build system. I could write paragraphs about why this is a good thing but we all know why.

Not everybody shares this sentiment. I think it's great to be able to get yourself up and running quickly, and start dabbling with the language immediately, but I also think that at the same time it is not so great, because there is no "one size fits all" purpose. I, for instance, happen to value the latter more than the former.


FWIW, I also wrote a lot of C++ for the first time last year, and found the lack of

> package management, one-liner built-in toolchains, built-in testing and build system

To be by far the least pleasant things about the language, especially C++20, where things like concepts are wonderful to use but then you can't rely on having a toolchain that actually supports them. The most recent Xcode/Apple Clang does not, so on Mac you need to find your lib(std)c++ elsewhere.

It'd be great if there were at least some way to make this easy, e.g. to create a virtual environment that sets all paths correctly. Or if the compiler from a toolchain would default to using its own headers and libraries instead of the system's. (I assume they don't for a combination of historical reasons, and because then you may be unable to link to system libraries due to things like mismatched standard libraries.)

Preferably, there would also be a GitHub Action to do it to make CI easier to set up.


> but then you can't rely on having a toolchain that actually supports it.

C++20 is still bleeding edge, so I'd advise you to pick C++17 or even C++14 if you want pleasant cross-platform coverage. That's what most companies with the goal of true cross-platform support will do. A 3-year window is really tight for compiler and library devs, especially given how many new features were introduced in C++20. And now consider how many different vendors there are... https://en.cppreference.com/w/cpp/compiler_support numbers them at 12, but it doesn't count the ones from the embedded space, for example, and probably some others are missing too, so 20? What we want to see on paper can hardly match the much more complex reality.

> The most recent xcode/Apple Clang do not

It's a PITA, as is usual with Apple, especially considering that all the other major compilers, including GCC, Clang, MSVC and ICC, handle it just fine: https://godbolt.org/z/Mj6ehq57v

> so on Mac you need to find your lib(std)c++ elsewhere.

This wouldn't work, because concepts aren't a library feature but a compiler feature, so perhaps your best bet would be to see if you can use vanilla Clang or GCC to compile the code on Apple machines. I am not an expert here.


Generally it is the people coming from C or C++98 who write unsafe code. People picking up C++20 start out using it safely, and continue.


While C++ is the one with more features, the same can be said about Java 20, Python 3.11, C# 11, Haskell 2021,... when comparing against their version 1.0.


The problem with C++ is that its features are not just additive. In most languages, if you take two features and use them both, they add up. In C++, they may actively interfere with one another.


I gather you don't have much experience with the latest versions of the languages I mentioned; in all of them I can think of examples that don't add up.


I’d be curious which Python features don’t work together?


One example, sorted out with Python 3, would be old-style classes versus new-style ones.

Another would be tracking down whether attributes get dynamically changed via __dict__ or __slots__, and how __slots__ definitions interact with each other if multiple happen to exist.

The way numeric division and remainder changed between Python 2 and 3.


I think you’re talking about a different issue here.

The person you replied to previously was talking about how certain new C++ features don't work together. For example, move with a lambda can cause UB.

You’re talking about changes to the language or if you have a conflict of definition.


All of those are changes/features, regardless of which form they take.


No.

They’re talking about feature incompatibility within a single version.

You’re talking about feature differences between different versions.

Completely different things.


> In most languages, if you take two features and use them both, they add up.

Java nio vs. io

Java Reader vs. InputStream (not necessarily the interfaces themselves, but the redundancy between them, e.g. should I use InputStream -> BufferedInputStream -> InputStreamReader? Or InputStream -> InputStreamReader -> BufferedReader?)

Java float[] vs. ArrayList<Float> vs Vector<Float> vs. FloatVector vs. FloatBuffer - just how many ways can we describe "a linear allocation of numbers" in Java at this point? And they're still adding new ones!

So no, other languages don't just magically handle this more gracefully than C++ does. If a language is successful, it will either suffer from this or it'll stagnate - it's the natural consequence of preserving backwards compatibility while adding new features & capabilities.


Reader is for text/characters while InputStream is for bytes. An InputStream always lies at the heart of a Reader. Java's I/O may not be the most elegant or simplest, but it makes additive sense assuming you don't just have a bone to pick with the language.

The float examples are artifacts of pragmatism between plain old data types versus heap allocated objects. Though I do admit Vector<Float> is absolutely obsolete and should be avoided. The others have a clear purpose and raison d'être.

Now java.util.Date on the other hand…


> The others have a clear purpose and raison d'être.

Yes, but they don't compose cleanly together. As in, I can't just use FloatVector in all places that took a FloatBuffer previously, or whatever. They aren't additive in an incremental-migration sense; they are additive in the "these are just wholly unrelated APIs in their own wholly distinct silos" sense. Aka, the thing C++ is regularly slammed for doing, even though its additions aren't even this clunky.


Don't forget Date/Time classes. Absolute mess.


Something that .NET also missed to fix.


I doubt you can present even just a single example. All languages' features combine multiplicatively.


The rule of 0/3/5 literally exists because the default behaviours misinteract with any override of the rest.
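For readers who haven't hit this: a minimal sketch of the misinteraction, where a user-defined destructor combined with the still-compiler-generated copy operations produces a double free:

    #include <cstddef>

    // A destructor with compiler-generated copies: the classic rule-of-three bug.
    struct Buffer {
        char* data;
        explicit Buffer(std::size_t n) : data(new char[n]) {}
        ~Buffer() { delete[] data; }
        // Missing: copy constructor and copy assignment operator.
    };

    int main() {
        Buffer a(16);
        Buffer b = a; // memberwise copy: b.data aliases a.data
    }                 // both destructors run delete[] on the same pointer: UB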


Yeah, that doesn't qualify. It is all one feature: override copy or move semantics, and there is a prescribed way to do it. Rule of zero, of course, is the norm, and is an extremely beneficial interaction.

2nd try?


> Yeah, that doesn't qualify.

It absolutely does.

> It is all one feature: override copy or move semantics, and there is a prescribed way to do it.

And yet rather than force you to do it, the language just goes into the weeds and generates complete nonsense.


C++ gets special treatment for some reason when it comes to backwards compatibility and modern language features.

Oh no! We got threads and timers! C++ recognized the existence of file systems and regular expressions, woe I say! Polymorphic lambdas? I've got std::bind1st, haha! Wait, not atomic! Anything but that!


Yes, but I think this is the ultimate fate of most successful languages. New languages start small and elegant, and everyone raves over them; then over time new features get added, and added... and eventually the elegance and orthogonality disappear. If this was a program rather than a language, now would be the time to refactor, but you can't because of backwards compatibility, so eventually you get to C++-like bloat. I actually like C++, but it's definitely out of control at this point... although many people just ignore the bleeding-edge features and basically code in something closer to C++11.

Languages really need to evolve to stay alive, but the evolution will eventually kill them!


... are there really people who go into their office job and think "oh this tool I'm using needs more grace or elegance?". If I had a carpentry company and my employees complained that their hammers and nails weren't elegant enough... I'd quickly look for new employees.


I agree. It is honestly a fine language IMO and while I have nothing against Rust, I also haven’t had much of an interest or incentive to use it. Yes, buffer overflow is a thing - it’s never been an issue in my experience. Use after free and memory leaks are infrequent, easy to fix, and caught early.

Golang is not a real replacement because 1. Golang core devs are too opinionated on random shit and make some things very hard to do without reinventing the wheel because "you shouldn't do that", partly because it's a corporate-owned language 2. GC. There are other minor things but those are the big ones; it's still an excellent backend language but can't replace C++.

Besides Rust everything else is a toy without stability and backwards compatibility and/or lack of libraries. Rust is fine, it’s just that the problems it tries to solve aren’t something that experienced C++ devs often struggle with.


> Besides Rust everything else is a toy without stability and backwards compatibility and/or lack of libraries. Rust is fine, it’s just that the problems it tries to solve aren’t something that experienced C++ devs often struggle with.

The single main reason for Rust's success is that this statement has been proven wrong again and again. C/C++ devs kept repeating it, and severe bugs kept getting discovered.


Yep, I totally agree. I took a long break from C++ after I learned the basics at university. I got into it again one and a half years ago when I needed really fast code, and I was pleasantly surprised how easy it is. I'm exclusively using smart pointers and I very rarely run into scenarios where I have problems with memory leakage, etc.

Since I had written the prototype in Java and now interface with Javascript/Typescript, I'm really amazed how clean and well-reasoned my C++ side of the program is. So yeah, I'm also really happy with the state C++ is in right now.


Why was this comment flagged? C++ haters... really?


It doesn't deserve a flag, but the sunny optimism feels delusional. It's obvious that C++ has problems, and there are obvious red flags, like the fact that someone like Linus Torvalds vehemently hates the entire language.

The poster addresses none of this and has an overly positive attitude towards C++. There's obvious nuance on this topic that the post fails to address, and he instead just preaches to a biased choir. At least he admits he's a beginner.


[flagged]


All the Rust programmers I know are really happy to discuss the limitations and shortcomings of the language. It's possible that the problem you're pointing at comes from the word "reddit" rather than "rust" :)

Note: I've looked at the parent's comment history. The only one of them which was not a troll against Rust was a troll against the EU.

I'll stop feeding the troll.


[flagged]


Nobody is “getting personal” on this thread.


We use rust full time at my job, and griping about the language’s rough edges is a common pastime. The fanboys are loud and obnoxious but I can assure you they don’t make up the whole rust community.


Rust has its rough edges, but what people do is gripe about those in language development spaces. Then the rough edges get filed down in the next release trains, or sometimes the next language edition. In C++ they are totally unfixable.


> All the higher level advanced concepts work and flow beautifully as you'd expect

Are we talking about the same language? C++?


Also started learning C++ last year. Can concur.


What resources were you using to learn C++ that made "All the higher level advanced concepts work and flow beautifully"?


A tour of C++, third edition.


...is not 2023.


Sorry, I have only a sketchy idea of Zig: I understood it to be a competitor? Why does the article's author in fact disregard it?

The authors of Val state:

> Our goals overlap substantially with that of Rust and other commendable efforts, such as Zig or Vale


Zig is usually seen as a 'better C', not as a 'better C++'. Whether this is actually the case is debatable though (some of Zig's features are 'leaking' into the territory of higher level languages such as C++ or Rust).


I feel this viewpoint is pedantic. Zero-cost abstraction is the space we're looking at, and from this perspective, C and C++ are in the same space.

If not then are we implying a successor to C++ needs to be Object Oriented? Because that's the main difference between C++ and C. Clearly the successor languages mentioned in the article do not have the same heavy OOP bias embedded into themselves as C++ and thus they all can be looked upon as successors to C as well.

I think the goal here is just to find a language that does the whole "zero cost abstraction" thing better than the status quo.


I'd say RAII and the template system are the main differences between C and C++, not the OOP features. "Zero cost abstraction" is mainly just a welcome side effect of the compiler's optimization passes, and those work in C just as well as in C++ (but typical C++ code depends more heavily on compiler optimizations to clean up redundant "left-overs" from template resolution).
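For anyone who hasn't seen RAII spelled out, a minimal sketch of the idea (scope-bound resource release, with copies deleted per the rule of three); the class here is illustrative, not a standard facility:

    #include <cstdio>

    // RAII: the resource is acquired in the constructor and released in the
    // destructor, so every exit path (return, exception) closes the file.
    class File {
        std::FILE* f_;
    public:
        explicit File(const char* path) : f_(std::fopen(path, "r")) {}
        ~File() { if (f_) std::fclose(f_); }
        File(const File&) = delete;            // no accidental double-close
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    };

    void use(const char* path) {
        File f(path);          // no matching fclose() needed anywhere below
        if (!f.get()) return;  // early return still runs the destructor
        // ... read from f.get() ...
    }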


No. Zero cost abstraction was a term used by the inventor of C++. It was one of the design principles, not a side effect.

Source: https://www.stroustrup.com/ETAPS-corrected-draft.pdf

Search for "Zero-overhead abstraction mechanisms".

If you read it you will see Bjarne's goal was to provide the highest level of abstraction possible at "zero overhead". At the time he was creating the language, OOP was the newest and "highest" form of abstraction around... problems with it as an abstraction weren't as evident back then as they are now.

But yeah, I agree that you can make an argument for RAII and templating being the main differences over OOP. But Zero cost abstractions are not a side effect at all.


> RAII and the template system

Also function and operator overloading, often overlooked while being of critical importance.


There are so many more differences between C and C++. Templates, RAII, type inference, lambdas, etc. These are the abstractions that try to be zero cost; C has little abstraction overall so it doesn't claim "zero cost abstraction".

I don't think you need to be fully OOP to claim to be a C++ successor, but you definitely need to have decent zero cost abstractions, including a form of templates/generics. Having RAII might also be a requirement.


C++ is not object-oriented; it's a multi-paradigm language that supports OO, generic, and even some functional programming. It is a schizophrenic beast that will go (and take you) in any direction you do and do not wish to go.

For example, the STL is everything with a sprinkling of OO. Generics and template metaprogramming are everywhere. Algorithms are defined as distinct from data structures IN ADDITION TO object methods. It is simultaneously elegant and monstrous.

Once the diamond inheritance pattern for iostreams emerged, there was a HARD turn away from OO in the standards body. C++ is literally the embodiment of every programming trend for the last 30 years.


Hmm, tell me about the diamond inheritance pattern for iostreams and that hard turn away from OO. Never heard of this and couldn't find any sources on an initial search. Got any links on it?


All the successor languages mentioned in the article have ways to provide methods and some form of polymorphism directly supported in the language, and not the OOP-in-C kind of kludge.


Polymorphism is not an OOP concept.

https://en.wikipedia.org/wiki/Polymorphism_(computer_science...

You will see Wikipedia doesn't treat it as such. Examples are shown in functional languages, and it is not hierarchically categorized as exclusively a part of OOP.


From that link,

"In a 1985 paper, Peter Wegner and Luca Cardelli introduced the term inclusion polymorphism to model subtypes and inheritance,[2] citing Simula as the first programming language to implement it"

Polymorphism is part of what makes a language OOP-capable; naturally, it also exists in other paradigms.

We can also go through other examples, like discussing the equivalence of closures and methods.

https://wiki.c2.com/?ClosuresAndObjectsAreEquivalent

Do you also want Simon Peyton Jones' point of view on how Haskell type classes and OOP relate to each other?

"Classes, Jim, but not as we know them"

https://www.microsoft.com/en-us/research/publication/classes...


We can keep the discussion going but I don't see why?

Polymorphism is not part of OOP; it's much more universal than that. It can be used by OOP the same way numbers can be used in OOP programming. But numbers are not hierarchically under the umbrella of OOP.

I mean, what is the point you're trying to make here? That polymorphism is fundamentally and categorically OOP? Are you saying that anything that uses polymorphism (which is almost every language under the sun) is OOP?

>Do you also want Simon Peyton Jones point of view on Haskell type classes and OOP relate to each other?

No? Wouldn't that be a completely different topic?


All worthy candidates to successor languages to C++ also support OOP in some form, that is the point.

Not the "all I know is Java" kind of OOP, rather the OOP from SIGPLAN, ECOOP, IEEE, and ACM papers, which happen to have polymorphism as one of the common features across all variations of what can be considered an OOP-capable language.


I think some people believe C++ to be a mistake - if such people were to develop a new language, they might take issue with framing the new language as a better mistake.


I don’t know that I’ve heard anyone claim that C++ itself was a mistake, although there have obviously been many (acknowledged) mistakes made with C++.

All things considered, for the time, given the paradigms we understood to be effective, and for the value within the design space, C++ was about as close to excellence as we could’ve hoped for. It didn’t replace C, nor was it meant to, but for its purpose it was a great success.


No. C++ being a mistake is a valid argument. There are enough smart people claiming this that you can't dismiss it (Linus Torvalds for example). While the argument is not definitive it is valid enough that it must be considered.

One thing I highly disagree with in your statement is this: " C++ was about as close to excellence as we could’ve hoped for."

In my opinion, C++ is nowhere close to excellence. Best available option for certain applications is more inline with my opinion of it.


> There are enough smart people claiming this that you can't dismiss it (Linus Torvalds for example)

Torvalds' anti-C++ rant was full of factual errors when he wrote it, and it hasn't improved with age.

If you want to point to "smart people claiming C++ was a mistake", you need to cite people who actually know and have experience with C++ in the first place - which isn't Torvalds.

There are also other smart people, like John Carmack, who have been converted from C to C++ and now prefer the latter, especially for working in a team environment. C++ is far from perfect, but there are also pretty strong arguments that it still represents an improvement over C on average. Yet (almost) nobody argues that C was a mistake.


>There's also other smart people, like John Carmack, who have been converted from C to C++ and now prefer the later especially for working in a team environment. C++ is far from perfect, but there's also pretty strong arguments that it does still represent an improvement over C on average. Yet (almost) nobody argues that C was a mistake.

Agreed. Here is Wikipedia article on criticism of C++. https://en.wikipedia.org/wiki/Criticism_of_C%2B%2B

The second sentence has a short list of some of the smart people who don't like it: "Among the critics have been: Robert Pike, Joshua Bloch, Linus Torvalds, Donald Knuth, Richard Stallman, and Ken Thompson". I'll leave it up to the reader to research why they criticize C++ (not C). However in the references there's a good article from Rob Pike:

https://commandcenter.blogspot.com/2012/06/less-is-exponenti...


Being critical of something does not at all mean they don't like it or, more significantly, think it's a mistake.

Rob Pike's post there, for example, absolutely never comes anywhere close to calling C++ "a mistake."


I don’t understand. For what reason would Pike and Thompson have created Go if they had no criticism of C?


> One thing I highly disagree with in your statement is this: " C++ was about as close to excellence as we could’ve hoped for."

It’s fine to disagree, but it seems mildly disingenuous to omit all the qualifiers I (very deliberately and intentionally) prefaced that statement with when you do so. In that sense, you’re actually disagreeing with an opinion I don’t actually hold.


I omitted out of brevity. But ok, I'll put it in:

>All things considered, for the time, given the paradigms we understood to be effective, and for the value within the design space, C++ was about as close to excellence as we could’ve hoped for. It didn’t replace C, nor was it meant to, but for its purpose it was a great success.

I disagree with this entire statement. It's not even close to excellence even given all the deliberate and intentional qualifiers you put around it.

I believe that when I re-clarify my comment in this context, it becomes clear that I disagree with your statement in its totality.

I think C++ became popular due to luck, in the same way JavaScript came to dominate the frontend. I believe there could have been a number of better alternative outcomes if luck had been different.


Quoting "celebrities" as if their word is infallible for some reason is just silly.

Torvalds has ranted about many things he knows nothing about and this is no exception. His opinions belong in the trash bin and should be ignored, not used as evidence.


The 'Mutable value semantics' section reads like a carbon copy of the Rust borrow checker. Is it just me, or is Val just ripping it off completely? I'm honestly not sure of this part, so if I'm wrong someone can let me know.

Other than that, the whole title of the article is manipulative. It's not a fair and balanced examination of all the possible successors of C++. The author admits he works on Val, and the article is obviously promoting Val as superior to other alternatives.


What's the difference between Val and Vale? Besides that one letter, of course. There's also Vala which is its own thing, but also in the "not quite C++" space. Very confusing!


The main one is that Val comes from former Swift developers, and it is kind of sponsored by Adobe.


This one is the best introduction. Never heard of Val until these last few days.


AFAICT, they're both trying to achieve Rust's power with much more simplicity. Val uses mutable value semantics, which (afaict) is like using Rust's &mut and .clone() a lot. Vale uses a foundation of generational references, then region borrow checking to eliminate their overhead.

And it gets even better! There's VALE (Verified Assembly Language for Everest) and two Vlangs ;)
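For readers unfamiliar with the mechanism: a rough C++ sketch of the core generational-reference check, written as a pool. This side-table layout is only one possible representation, slot reuse is omitted, and all names here are made up; Vale's actual implementation differs (it stores generations inline):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct Slot {
        std::uint32_t generation = 0;
        int value = 0;
        bool live = false;
    };

    struct Ref {
        std::size_t index;
        std::uint32_t generation; // the generation this Ref was created with
    };

    class Pool {
        std::vector<Slot> slots_;
    public:
        Ref alloc(int v) {
            slots_.push_back({0, v, true}); // slot reuse omitted for brevity
            return {slots_.size() - 1, 0};
        }
        void free(Ref r) {
            slots_[r.index].live = false;
            ++slots_[r.index].generation; // invalidates every outstanding Ref
        }
        int* deref(Ref r) { // the checked dereference at the heart of the idea
            Slot& s = slots_[r.index];
            return (s.live && s.generation == r.generation) ? &s.value : nullptr;
        }
    };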


And the next year will be the year of

   THE Successor Language
And that's (of course, hands down) Zig. Thank me later. :-) Jokes aside, I really like where Zig is headed. As a gambler, I am willing to bet on Zig over Val any time, and twice on Mondays.


Zig is a really nice language with some good concepts. However, it is not a successor to C++.


Unless there is a significant reason to avoid garbage collection (for the vast majority of software there isn't), Crystal is the best C++ replacement out there. Its Ruby syntax is easy to read and write, its OOP model derives from Smalltalk (via Ruby), and it has easy interop with C code.


Aren't users who are okay with GC already not using C++?


A lot of C++ software out there could afford to use a GC.


What’s wrong with RAII?


Unless you use shared_ptr for all heap allocations (and are also careful about references to stack-allocated data), it allows for use-after-free (UAF) bugs. And at that point you are probably better off with a GC, since you've got all of the problems of a reference-counting GC but none of the opportunities for optimization.


Modern C++ provides enough nuts and bolts to avoid explicit allocations. My current C++ project (a backend business server) has only a single one. The rest is handled happily by RAII (what a dumb name). I could eliminate that single explicit allocation as well, but I think I can manage a single occurrence ;).

Also, it would be really dumb to use shared_ptr for every case of allocation.


"I only every allocate things on the stack" is not sufficient to have safe lifetime access. I've seen enough cases of some stack-allocated string getting passed as a string_view and then blowing up because somebody stored it.

"I have a complete tree of object ownership with a single allocated root" is also a mess for a different reason - you literally never delete anything. Either you have a very very strange application or you are being massively wasteful with memory pressure.

It would be really dumb to use shared_ptr for every allocation. But it is the only way in C++ to systematically ensure that you never have a uaf (I guess you could also have a custom allocator and a custom pointer type that does a null check on every dereference - but now you are paying a major performance cost for a lock and a branch on every dereference). That's why I mention the benefit of a GC. You can get safe lifetime access without shared_ptr everywhere (or going all the way to where Rust went and demanding very explicit lifetime annotations for the compiler to use).


>"I've seen enough cases of some stack-allocated string getting passed as a string_view and then blowing up because somebody stored it ..."

Shortest answer - I do not get scared into using particular tools just because they're "safe", and in practice I've never had to deal with the problems you mentioned in my products. There are also various tools that catch these kinds of errors for C++ and other languages. I do use those tools ;)


A major part of my career has been to write these tools. There's a reason why none of them are sound. I do not believe that modern C++ and the surrounding tool ecosystem is remotely capable of preventing UAF bugs in complex applications, which is a big problem for many applications given the potential consequences of incorrect memory access.

You might belong to the rarified class of uber-developers who can write bug-free C++ applications, but if that class of people exists it is a small one.


>"A major part of my career has been to write these tools. There's a reason why none of them are sound."

This is an interesting take coming from a person who writes the tools. Obviously you know way more about it. I do understand that if I, for example, link ASAN and it does not show errors for a while, that is not a guarantee; the software just might not hit that error path. But I am a practical man running my own company and cannot be a perfectionist. If software passes my own tests, analyzers and other tools do not report anything suspicious, and I have no bug reports from a customer, I can sleep well and count my money ;)

>"You might belong to the rarified class of uber-developers who can write bug-free C++ applications"

I absolutely do not write bug-free applications, but I tend to have very few bugs to begin with and I eradicate them pretty fast. Some of my stable releases run for years without any bug-related complaints. Not sure if that qualifies for "uber". I am however a very good architect and deliver solutions in very diverse areas: desktop, multimedia, enterprise backends, middleware, device control firmware, etc. I also understand electronics.


I'm on the static side. The sanitizers are indeed great tools but they are mostly too performance-costly to run in production environments, so they just exist for your tests and maybe for some small percentage of your production environment. Deployed in this manner, it isn't going to catch everything.

I think that ASAN is also theoretically unsound (I don't know its innards well enough), based on how it actually tags pages, but in practice this isn't super relevant. The question is whether you actually execute the problematic sequence in some build that has ASAN enabled.

> Not sure if that qualifies for "uber".

Writing C++ code with minimal memory errors would qualify as "uber", given the data available from both academic and industrial research on C++ application development. The data is very clear - real applications run into these issues with frequency even when they consistently use modern smart pointers.


is your software actively attacked like major browsers/operating systems?


The current project (a business backend server in C++) that would qualify for attack sits behind NGINX. Looking at the logs, I see all kinds of attempts related to Wordpress, PHP, etc. We also use 3rd-party protection in front of NGINX in production.

Since the server exposes a proprietary JSON API with what I would call extreme validation before trying to actually do anything, nothing that seems to be illegal gets in.

I mean, it is not a 100% guarantee, as nothing else in our lives is, but so far nothing extraordinary (keeping my fingers crossed).


If you see it in the logs, that seems like bots looking for low-hanging fruit. What about focused attacks by sophisticated adversaries? Here, a heap buffer overflow in a C++ library is a major part of enabling RCE: https://en.wikipedia.org/wiki/Xpdf


shared_ptr use is very rare in good modern C++. UAF bugs are much rarer. Most people using modern C++ never have any, see any, or risk any. So, no, they would not be better off with GC.


> shared_ptr use is very rare in good modern C++

That's true. But if you do the preferred thing of using unique_ptr, then you can still happily access memory beyond the lifetime of an object via some bug.

> Uaf bugs are much rarer.

This is not true. You can go look at CVEs for large projects like Chrome, which have teams of people trying to weed out these kinds of issues, and still see UAFs where absolutely nothing is allocated with "new".


That's Google for you.

You cannot access memory you have no pointer to, not even accidentally.


"Never hand out non-owning pointers or references" is not, in my opinion, an acceptable design constraint for a tremendous number of systems.


Hand them out freely. Don't store them. Likewise string_view. They go down the call chain, not up.


Basically, if your C++ function takes a pointer, you need to choose between unique_ptr or a raw pointer (documenting the ownership).

With a GC, there is a global owner, so you don't have to think about ownership for everything; this breaks down for other kinds of resource release (maybe half of objects), which is not necessarily easier. But typically GC languages also allow you to use RAII, though maybe not as consistently as C++.

What kind of programs can have unclear ownership? Basically everything that doesn't need to be high quality.


Unlike GC, RAII is “eager” - which has performance implications.


RAII can push resources to a collection that is disposed of later.
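One plausible reading of this, sketched as a type-erased "graveyard" that takes ownership of RAII objects and destroys them in a batch later (the class and its names are hypothetical):

    #include <memory>
    #include <utility>
    #include <vector>

    // Ownership moves into a type-erased collection; destructors run in flush().
    class DeferredDisposer {
        std::vector<std::shared_ptr<void>> graveyard_;
    public:
        template <typename T>
        void defer(std::unique_ptr<T> p) {
            graveyard_.emplace_back(std::move(p)); // the deleter is preserved
        }
        void flush() { graveyard_.clear(); } // dispose of everything now
    };

This keeps scope-bound release semantics while moving the actual destruction out of the hot path.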


It is not a solution for all cases.


Care to give cases where it is not a solution?


- Some graph structures: shared_ptr does not release circular data structures, and it has a lot of memory overhead.

- Memory management of concurrent containers: lock-free algorithms require GC or hazard pointers, and the latter are basically a form of GC.

- You often need a GC engine when implementing or integrating another language.
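To illustrate the first point, a minimal sketch of a reference-counting cycle that shared_ptr never reclaims (the usual workaround is to make one edge a weak_ptr):

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next; // owning edge; weak_ptr would break the cycle
    };

    int main() {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;
        b->next = a; // cycle: each keeps the other's refcount at >= 1
    }                // a and b go out of scope, but neither Node is destroyed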


RAII amounts to reference counting, which means you need to handle circular references yourself; a garbage collector should handle them automatically. In addition to memory, GCs can catch other unused resources, like unclosed file handles.


Arena allocation is useful when many objects have the same known lifetime. An example from games is per-frame allocations. All objects needed only for the frame being rendered are carved out of a large chunk of preallocated memory. The chunk is reused every frame instead of being freed back to the system allocator. In addition to the performance benefits, this results in less memory fragmentation.
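A minimal sketch of the per-frame arena idea described above, assuming trivially destructible data (reset() runs no destructors); the class and its names are made up for illustration:

    #include <cstddef>
    #include <vector>

    // A bump allocator: allocation is a pointer bump; "freeing" is resetting
    // a single offset once per frame, with no per-object deallocations.
    class FrameArena {
        std::vector<std::byte> buf_;
        std::size_t offset_ = 0;
    public:
        explicit FrameArena(std::size_t bytes) : buf_(bytes) {}

        void* allocate(std::size_t size, std::size_t align) {
            std::size_t p = (offset_ + align - 1) & ~(align - 1); // align up
            if (p + size > buf_.size()) return nullptr;           // exhausted
            offset_ = p + size;
            return buf_.data() + p;
        }

        void reset() { offset_ = 0; } // called once per frame
    };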


nothing


I can think of a few cases in which GC would really simplify things, both in C++ and in Rust. I want to do some serious D and Go (and Crystal?) some day to see how it changes life.


Crystal is interesting, but the lack of incremental compilation seems to force a fundamental limit on how large the projects you work on can be. Am I wrong here?



