
> Root cause

> Traditional C++ exceptions have two main problems:

> 1) the exceptions are allocated in dynamic memory because of inheritance and because of non-local constructs like std::current_exception. This prevents basic optimizations like transforming a throw into a goto, because other parts of the program should be able to see that dynamically allocated exception object. And it causes problems with throwing exceptions in out-of-memory situations.

> 2) exception unwinding is effectively single-threaded, because the table driven unwinder logic used by modern C++ compilers grabs a global mutex to protect the tables from concurrent changes. This has disastrous consequences for high core counts and makes exceptions nearly unusable on such machines.

That's really interesting. I've become very anti-exceptions in recent years for various much-discussed reasons (e.g. they're hard to follow, a false economy, make it difficult if not impossible to write thread-safe C++ code, and using exceptions as flow control is an anti-pattern).

One of the proposals is a value-or-error type, which is basically what Rust has. I really like Rust's enums and match expressions.

It seems so difficult to make changes like this to C++ at this point; at what point do you just have to start again?




I'm relatively new to C++ and I found absl::StatusOr<T> and its helper macros to be extraordinarily pleasant.
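
In case it helps anyone, a minimal sketch of the pattern (ParsePort is a made-up example, not an abseil API):

    #include <cstdint>
    #include "absl/status/status.h"
    #include "absl/status/statusor.h"

    // Returns either a valid port or a structured error.
    absl::StatusOr<uint16_t> ParsePort(int raw) {
      if (raw < 1 || raw > 65535) {
        return absl::InvalidArgumentError("port out of range");
      }
      return static_cast<uint16_t>(raw);
    }

    void Demo() {
      absl::StatusOr<uint16_t> port = ParsePort(8080);
      if (!port.ok()) {
        // port.status() carries the error code and message.
        return;
      }
      uint16_t p = *port;  // safe only after checking ok()
      (void)p;
    }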


Yep, these are used extensively at Google (which is where the abseil library came from, and which is famously anti-cpp-exceptions) and they work very well. If I somehow found myself writing a new C++ project I'd probably reach for abseil (and some of the other parts of the Google toolchain: GoogleTest for testing, bazel for builds).


It has helper macros now? I liked StatusOr but had to write my own helper macros for it at the time.
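
For reference, the hand-rolled versions usually look something like this (illustrative only; real versions generate unique temporary names so multiple uses in one scope don't collide):

    // Propagate an absl::Status early-return on error.
    #define RETURN_IF_ERROR(expr)                       \
      do {                                              \
        const absl::Status _status = (expr);            \
        if (!_status.ok()) return _status;              \
      } while (0)

    // Evaluate expr; return early on error, otherwise assign the value.
    #define ASSIGN_OR_RETURN(lhs, expr)                 \
      auto _statusor = (expr);                          \
      if (!_statusor.ok()) return _statusor.status();   \
      lhs = std::move(*_statusor);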



std::expected doesn't require language changes, it's a library type. If anything, it shows how C++ is multi-paradigm.
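
For the unfamiliar, a minimal sketch with C++23's std::expected (parse_int is just an illustration):

    #include <charconv>
    #include <expected>
    #include <string>
    #include <string_view>

    // Value-or-error as a plain library type: no throw, no unwinder.
    std::expected<int, std::string> parse_int(std::string_view s) {
      int value = 0;
      auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
      if (ec != std::errc{} || ptr != s.data() + s.size()) {
        return std::unexpected("not an integer: " + std::string(s));
      }
      return value;
    }

    // Callers branch explicitly instead of unwinding:
    //   auto r = parse_int("42");
    //   if (r) use(*r); else log(r.error());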


This is the great strength and weakness of C++. Increasingly the answer to C++'s rough edges is "We don't do things that way anymore. Everyone does X now", where X is the hot new thing. RAII is the best example I can think of, where some people insist that no one would ever use the "new" and "delete" keywords anymore. Except for all the C++ devs that do, and all the C++ code that exists that does and must be maintained.

It leads to the current situation where you have C++ "the language" which is everything, and then C++ "the subset that everyone uses" where that subset constantly changes with time and development context.


I think I agree with the sentiment but not the example, no one seriously advocates for no-new/no-delete (collections must still be written, somewhere) but rather that new/delete are generally a code smell and there are idioms that can help isolate bugs. Part of maintaining old code is updating to those idioms.

But yea this kind of thing hit me recently on the interview circuit. I wrote some correct C++ (in that it was correct for the idioms in vogue when I last wrote C++ regularly for money) but I got feedback I wasn't as senior as they hoped due to my (lack of) C++ knowledge. Part of that was a shitty interviewer, but it's also just a fundamental part of the language. If you leave for a few years or try and change shops you find that everything under you has been removed or a completely different subset of the language is being used elsewhere. The complete lack of an ecosystem just reinforces that.


To be fair I imagine the same happening to someone coming to a Java interview still writing Java 8, or writing C# as if .NET Framework 4.8 is the latest version (C# 7.3).

It doesn't happen only in C++ circles.


I kinda look to when the thing finally stabilizes as a sign as to how bad the problem was. For instance, Javascript front end was a nightmare for a long time, but it seems to have finally stabilized into a reasonably stable configuration with a couple of winners, some minor specialized choices, and the endless churn is now a minor sideshow instead of something that is changing the default choice every six months. There was a bad problem there, but it seems to have been satisfactorily conquered for now. (I expect as static typing creeps ever more deeply into the JS ecosystem that at some point that may cross a critical point and cause some more churn, but at least for now things seem more stable.)

While C++'s churn frequency seems to be higher than the Javascript front end churn frequency, as an outsider, it still seems like "best practices" on C++ are churning every 1.5-2 years, it's been happening for my entire career, and it's still happening. If I seem a bit unsympathetic to the claims that the problems are solved if you just write your C++ code this way now, it's because I first heard that in 1998 or so, for a set of common practices now considered laughably out of date, of course.

At some point it becomes more cost-effective to just "churn" on to Rust next time, because even though in that same time frame Rust is a younger language that was going through its early design iteration phase it still seems like it has settled into a lower-frequency churn rate lately for "best practices" than C++.

There's probably some interesting and deeply profound reason why C++ just can't seem to stabilize across what is approaching an entire human generation, but I'm nowhere near interested enough in learning it to actually learn the amount of C++ it would take to find it.


> it still seems like it has settled into a lower-frequency churn rate lately for "best practices" than C++.

Rust is still adding a ton of new language features, especially around async, compile-time code evaluation and the type system (const generics, GAT/HKT, existential types etc.). We'll very likely see further developments in more areas next, e.g. to match C++ developments in parallel and heterogeneous compute (GPUs and the like), or to add forms of proof-carrying code, etc.


A lot of that is not what I mean by "churn". What I mean by "churn" is changes in best practice. Python has been adding a lot of features, but with the possible exception of static typing support, most of them haven't made many changes to what constitutes best practices. They might make "nicer ways to write that code" but the old styles haven't been deemed wrong. async is also not exactly what I mean; this allows new code to be written that mostly couldn't before. This one is only a partial miss though since it did deprecate some older libraries, but those libraries weren't really deemed "the right answer" either.

C++ is constantly changing what a "best practice" is. The latest hotness from three-generations-ago churn is now considered broken.


C++ gets new features that provide a better way to solve common coding problems. This is not accidental, and not a problem: the new features were added because they offer that better way.

Failing to use the new feature is just failing to write code the best way that is available right now. In 2013 you had no choice but to do it the old way; but you don't have to anymore, because the language and std library have caught up with you.

Being responsive to the needs of its community of programmers is job one for a language committee. If new, better ways don't arise, your language has stagnated.


> This is not accidental, and not a problem

You're right it's not accidental, but the problem is that when you continually add new features you end up creating a confusing mess. Yes, the new features may respond to some need in the community, but by adding it you've also:

- Introduced possibly unforeseen issues, because features are never added in isolation; they interact with one another, and the more features you have the harder it is to test them all.

- Created confusion, because now all the old information on the internet is out of date.

- Required everyone to update their tooling to support the new features.

- Made it harder for new people to start learning the language.

> Failing to use the new feature is just failing to write code the best way that is available right now. In 2013 you had no choice but to do it the old way; but you don't have to anymore, because the language and std library have caught up with you.

This is a nice idea but it doesn't reflect reality. If the C++ user survey [0] is to be believed, a full 67% of users were restricted from using the latest C++ at the time (either entirely or from certain features).

> Being responsive to the needs of its community of programmers is job one for a language committee.

This is true but it also must be balanced against mission scope creep and design considerations. Being everything to everyone is not design.

[0] https://isocpp.org/files/papers/CppDevSurvey-2018-02-summary...


You are very confused.

People not using the latest Standard are waiting for support within their particular environment, not because they want to stay with a less performant version of the language.


But they’re still not on the latest version…


People not using the latest Standard are waiting for support within their particular environment, not because they want to stay with a less performant version of the language.


But this is what you said:

> Failing to use the new feature is just failing to write code the best way that is available right now. In 2013 you had no choice but to do it the old way; but you don't have to anymore, because the language and std library have caught up with you.

What I hear you saying is that failing to use new features is a failure to write the best code available, and that before maybe you didn't have a choice but not anymore.

And yet, a huge chunk of C++ devs are forbidden by their organizations from using various C++ features that have been added throughout the years. It's not just C++17 but it goes back even further. This is the whole point of TFA. So it's not just that people are "waiting" for tools to catch up, unless you have evidence of this.


Organizational inertia is a problem, but not a problem that a language or Standard can fix.

When an organizational logjam is broken, programmers can immediately switch over to the better, more modern way of coding enabled by the newer Standard they are allowed to use. If they were on C++98, and now they can use C++14, they are still better off than before, even if the current Standard is C++20: they will be able to code according to best practices for C++14, which are better than for C++98.


I'm not convinced they're comparable.

C++20 is adding a module system to C++. That's a major change to the compilation model. It is also adding concepts, a major change in the way templates are to be written. The very article we're commenting on is discussing how exceptions should possibly be replaced by another error report mechanism, several proposals are in flight for this. There's also the "destructive moves" proposals that have the potential to change a lot how we write types.

A telltale sign that these changes are major is that the entire std has to be "modularized" to support modules, and to be modified to support concepts. Similarly if exceptions are to be revised a large chunk of exception using functions from the std would need to be modified.

On the Rust side, I think the only change that has even a comparable impact is const generics (and maybe specialization).

Existential types and GAT will change how to express some traits (allowing a LendingIterator for example), but I don't expect they will affect a large portion of Rust's std.

Also of note is that the Rust changes come to add new systems orthogonal to the existing features (const generics fill an obvious void compared to C++, same with GAT and existential types where Rust's expressivity is limited in comparison with C++ atm). By contrast in C++, the module system comes to replace the headers, and a change to exceptions would replace the current exception system, creating churn.


> It is also adding concepts, a major change in the way templates are to be written.

Indeed, instead of crazy tricks with SFINAE and tag dispatch, templates can be written much more easily.
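
A toy before/after to illustrate (twice is a made-up example):

    #include <concepts>
    #include <type_traits>

    // Pre-C++20: constrain the template via SFINAE.
    template <typename T,
              typename = std::enable_if_t<std::is_integral_v<T>>>
    T twice_old(T x) { return x + x; }

    // C++20: the constraint is part of the declaration.
    template <std::integral T>
    T twice(T x) { return x + x; }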


That's probably correct; I have not used concepts yet (stuck in C++14 right now). I certainly have my beefs with both features (modules that are orthogonal to namespaces, no standard way to find modules, new and exciting ways of committing ODR violations, generally a complicated module system with quirks when there is so much prior art on modules in other languages, concepts being structural and not nominal, concepts being a "lower bound" on behavior and not an "upper bound" (thus not eliminating duck typing)), but my larger point is the scope of these changes to the language, not their (purported) benefit that I'm by and large unable to assess right now.


Modules are mostly working on VC++, no quirks needed. In fact, I only use modules for my hobby coding now.


> it still seems like "best practices" on C++ are churning around 1.5-2 years, it's been happening for my entire career, and it's still happening

I have to disagree with this quite strongly. What "best practices" churn are you seeing? Your reference example of "don't use new & delete" (which I agree with) was a single best practice change that happened ~10 years ago. Similarly https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines is like 5-7 years old now, and afaik hasn't had any significant revisions?

The "churn" was really pre-C++11 to post-C++11. It was more like a python 3 moment than churn, other than it's taking a long, long time for code bases to catch up.


I think you're mostly right, but there is a bunch of fluctuation around just how much ugly template magic you're supposed to use vs. plain old imperative control flow (<algorithm>, ranges, I'm looking at you!)


> C++ just can't seem to stabilize

It's not like C++ is oscillating. It continues to improve, which is a good thing.


Adding more and more features isn't necessarily an "improvement".


Sure, but that in itself is also not an argument. The space of programming languages in general is moving forward and languages keep adding more (usually higher-level) features. C++ is mostly trying to keep up.


Fred Brooks put it this way:

  "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas."
IMO C++ does not have conceptual integrity.


A language that has stopped responding to the evolving needs of its user community is a dead language. C++ is not a dead language.

Your complaint is that it is not dead.


No, a language can stay current by evolving to fit its purpose. A good example of this is Matlab. Matlab is older than C++ and is still heavily used today, enough that it's one of the few remaining programming languages to support an actual business.

Matlab is not the same language it was in the 70s. But the core language is still largely about manipulating arrays and it does it well. It has evolved by adding functionality through toolboxes and creating new, innovative developer tooling. But it hasn't jumped on every PL bandwagon that has driven by over the last 50 years.


Matlab usage has been in decline for many years.


Matlab is one of the top 20 programming languages in the world still after 60 years according to the TIOBE index [0]. It has maintained a large user-base and achieved profitability over this time, not by adopting every PL trend that has come and gone, but by adapting to new developments while staying focused on its essence as a language. It proves you can stay current without doing what C++ is doing.

Matlab's usage is down a bit since a peak in 2017, but over the last 20 years it's up over 300%, and it's done so by being a for-profit language. Mathworks is a billion dollar company, which is quite an achievement in the PL space in 2022.

Meanwhile C++ usage is up over the last couple years but the long term trend has been a steady decline over the last 20 years [1]. From 14% to a low of 4%, now back up to around 8%.

[0] https://www.tiobe.com/tiobe-index/matlab/

[1] https://www.tiobe.com/tiobe-index/cplusplus/


Citing TIOBE instantly demonstrates a fatally bankrupt argument: changes in TIOBE ratings have essentially nothing to do with actual usage, or with anything else quantifiable.

TIOBE is statistical noise. You would equally meaningfully cite your tea leaves, or crows flying overhead.


Okay, well then I suppose you have a better citation for your unsupported assertion that Matlab has been in decline for many years. I've backed up most of my assertions with citations, I think it's time you brought some sources to the discussion. What is your basis for anything that you've been saying here?

I mean, if TIOBE were really statistical noise, as you claim, there wouldn't be clear trends in industry reflected in the data, like the rise of Python in the last few years. And yet we see it, so clearly it's measuring something.


I read OP’s complaint as “it is not coherent”.


That is code for "it is evolving and I cannot be bothered to keep up".

All languages that are actually useful evolve. They get features that other people need, and you don't, just yet. Some of those features do turn out not to be perfect, because they are made by humans.

Comparing an old language to a new language, the new language will have many fewer of those both because it leaves many old things behind, and because it gets benefit of hindsight for the rest. Give it time, and it will accumulate "incoherence" of its own; the faster it evolves, the faster that happens.

The alternative is for a language not to be used. Then, it can stay pristine and "coherent", and not useful.

This is the human condition.


> "it is evolving and I cannot be bothered to keep up"

No, I actually teach C++ and have been keeping up with it for decades. It was the second language I learned in 1994 and I still code in it professionally today.

> Give it time, and it will accumulate "incoherence" of its own; the faster it evolves, the faster that happens.

Like I pointed out with Matlab, it’s a much older language than C++ yet is mostly coherent, far more than C++. This has less to do with C++’s age, the march of time, or the human condition. Otherwise more old languages would be as incoherent as C++, yet that’s not the case.


If you teach C++, I can only feel sorry for your students, having such a confused teacher.


Please leave the personal attacks out of this, thanks. I've said nothing personally against you and yet you've turned to calling into question my profession rather than the points I've raised. I understand it may feel like I'm attacking you personally as I'm criticizing a language which I gather you are very fond of, but criticizing C++ is not criticizing you, and I would appreciate it if you show me the same respect I've shown you. That's not what this site is for, and if you want to engage in that kind of back and forth I'd kindly decline.

My students give me high marks, my department (which includes faculty who have contributed to C++ spec over the years) is satisfied with my teaching, and I graduate students that go on to work at top companies and research labs around the world. I'm doing my job just fine, let's stick to talking about C++.


I apologize. I was out of line to write that.


Adding features from literally every other language, just to create a mess - isn't really a selling point.

Right now, to learn C++ in a generic way, you need to learn pretty much all programming paradigms and all of their variations - which is not a thing I would consider a plus.


>Right now, to learn C++ in a generic way, you need to learn pretty much all of programming paradigms and all of their variations - which is not a thing I would consider a plus.

I disagree, I would recommend learning enough that the tool starts providing value to YOU for YOUR problem domain and then pause/resume as needed. The main purpose of any programming language is to be productive in it and use it to solve a problem. In my opinion, there is no reason to learn more than you need about C++ unless you were a compiler author, on the C++ standards committee, or something similar.


"Generic," maybe, but in practice many people learn C++ as one of, as it were, two languages - one for application code developers and the other, for those who build libraries and provide APIs.


One of the reasons why I try to avoid C++ is that it's an unopinionated multi paradigm kitchen sink language.

There are great uses and great features, but there are so many of them and everyone has their own opinions.... Even in this thread there's a clear subset of people who "adore" C++ exceptions


So interesting, to me a language being opinionated is the main reason to put it on the do-not-use bin. I strongly believe that the best way to develop is through embedded domain-specific languages adapted to individual problems, and opinionated languages are always way too limiting regarding that.


Your statement contradicts itself - you're against opinionated languages, but insert an opinionated language (a DSL) into an unopinionated one.


No contradiction.

It is not the job of a general-purpose language to insert its own opinions. That is the system designer's job. Fighting with the language's opinions is a recipe for failure.


When it's an embedded dsl you always have the escape hatch of the entire host language - and it is not shocking to use it: being in an eDSL does not mean that you have to agree to it religiously, it's always a case-by-case engineering tradeoff.

Whereas when you're in an opinionated language and you cannot do what you want... ugly hacks quickly happen, such as people using bash scripts, mustache templates, etc. to preprocess their code to get what they want.


C++ is opinionated, just not in the same way as other programming languages. That is its strength.

(Perhaps the clearest C++ opinion is "zero cost abstractions")


There's no such thing as an "unopinionated" kitchen sink language. Language features have all sorts of unforeseen interactions that must be handled somehow, and good high-level design is needed to ensure that the interactions are sensible.


C++ is definitely unopinionated. Its evolution follows the path dictated by necessity, not opinion.


The "necessity" is defined by the three foundational opinions of c++ :

- zero cost abstraction

- system language

- compatibility with C toolchain


We're in the context of exactly such an unforeseen interaction. (Ironic, isn't it?)


My first contact with RAII was in 1993 with Turbo C++ 1.0, it is hardly a hot new thing.


That's not my point. The point is that this feature once was the hot new thing, and in the future there will be a hot new way to do the same thing in addition to all the old ways, because that's how C++ evolves.

And I would have to say the average C++ dev did not know about RAII in 1993.


Destructors have been in C++ since the mid-80s. Anybody who did not know about RAII in 1993 did not know any C++.


For the sake of pedantry only: I don't think we called it RAII in 1993-1996, but the technique was in use in that period, though it wasn't standardized in any way. IIRC, Mac developers would have been widely exposed to it by Metrowerks PowerPlant during that time.


>It leads to the current situation where you have C++ "the language" which is everything, and then C++ "the subset that everyone uses" where that subset constantly changes with time and development context.

But what exactly is wrong with that? I don't quite understand your argument here.


How do you get memory without new nowadays?


They’re probably referring to preferring std::make_unique or std::make_shared to bare new/delete. Using either of the former makes the ownership semantics clear and avoids the need to remember to call delete at the appropriate time.
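
A minimal sketch of the difference:

    #include <memory>

    struct Widget { int id = 0; };

    void demo() {
      Widget* raw = new Widget;    // must remember to delete on every path
      delete raw;

      auto owned = std::make_unique<Widget>();   // sole owner, auto-freed
      auto shared = std::make_shared<Widget>();  // ref-counted, auto-freed
    }  // owned and shared are released here, even on early return or throw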


As well as smart pointers, as others have mentioned, standard library containers are another way to handle dynamic allocation in certain situations. The vector container is probably the best example here; it alone provides massive safety and usability benefits over using new or malloc directly!
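
For instance, a small sketch:

    #include <cstddef>
    #include <vector>

    void demo(std::size_t n) {
      // Instead of: int* buf = new int[n]; ... delete[] buf;
      std::vector<int> buf(n, 0);  // sized, zeroed, and freed automatically
      buf.push_back(42);           // can also grow, unlike a raw array
    }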


make_unique, etc.


I've been using `expected`, i.e. value-or-error type, for a while in C++ and it works just fine, but the article shows it has some noticeable overhead for the `fib` workload for instance. Not sure if the Rust implementation has a different design to make it perform better though.


> Not sure if the Rust implementation has a different design to make it perform better though.

Prolly not; I expect the issue comes from the increase in branches, since value-based error reporting has to branch on every function return. Even if the branch is predictable, it’s not free.

And fib() would be a worst-case scenario, as it does very little per call; the constant per-call overhead would be rather major.
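
To make that concrete, here is a sketch of what a value-based fib looks like (not the article's actual benchmark code):

    #include <expected>
    #include <string>

    // Almost no work per call, so the propagation branches dominate.
    std::expected<long, std::string> fib(int n) {
      if (n < 0) return std::unexpected(std::string("negative input"));
      if (n < 2) return static_cast<long>(n);
      auto a = fib(n - 1);
      if (!a) return a;   // branch on every return to propagate errors
      auto b = fib(n - 2);
      if (!b) return b;   // ...even though it is almost never taken
      return *a + *b;
    }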


It's also worth noting that Rust does also have stack-unwinding error propagation, in the form of `panic`/`catch_unwind`, which can be used as a less-ergonomic optimization in situations like this. Result types like this also don't color the function, since you can just explicitly panic, which would be inlined at the call site and show similar performance to C++ exceptions.


This is very much non-idiomatic. Panics are not intended for “application” error reporting, but rather for “programming” errors.

The intended use-case of catch_unwind is to protect the wider program e.g. avoid breaking a threadpool worker or a scheduler on panic, or transmit the information cross threads or to collation tools like sentry.


Using up scarce branch-prediction slots is a good way to make your program unoptimizable. Time wasted because you ran out will not show up anywhere localized on your profile. (Likewise blowing any other cache.)


Using up BTB slots is an interesting problem but in practice doesn't seem to be a big issue. If it was, ISAs would use things like hinted branches but instead they've been taking them away. Code size is more important but hot/cold splitting can help there.

A problem with using exceptions instead is they defeat the return address prediction by unwinding the stack.


That hinted branches are not useful tells us nothing about the importance of branch predictor footprint. When hinting branches gives you a bigger L1 cache footprint, it has a high cost. Compilers nowadays use code motion to implement branch hinting, which does not burn L1 cache. (Maybe code motion is what you mean by "hot/cold splitting"?)

Anyway the hint we really need, no ISA has: "do not predict this branch". (We approximate that with constructs that generate a "cmov" instruction, which anyway is not going away.)

How does using exceptions defeat return address prediction? You are explicitly not returning, so any prediction would be wrong anyway. In the common case, you do return, and the predictor works fine.


> When hinting branches gives you a bigger L1 cache footprint, it has a high cost.

It was the same size on PPC, and on x86 using recommended branch directions (but not prefixes).

> Compilers nowadays use code motion to implement branch hinting, which does not burn L1 cache. (Maybe code motion is what you mean by "hot/cold splitting"?)

Hot/cold splitting is not just sinking unlikely basic blocks, it's when you move them to the end of the program entirely.

That doesn't hint branches anymore, though; Intel hasn't recommended any particular branch layout since 2006.

> How does using exceptions defeat return address prediction? You are explicitly not returning, so any prediction would be wrong anyway.

Anything that never returns is a mispredict there; most things return. What it does instead (read the DWARF tables, find the catch block, indirect jump) is harder to predict too since it has a lot of dependent memory reads.


It suffices, for cache footprint, for cold code to be on a different cache line, maybe 64 bytes away. For virtual memory footprint, being on another page suffices, ~4k away. Nothing benefits from being at the "end of the program".

Machines do still charge an extra cycle for branches taken vs. not, so it matters whether you expect to take it.

Negligibly few things never return; most of those abort. Performance of those absolutely does not matter.

Why should anyone care about predicting the catch block a throw will land in after all the right destructor calls have finished? We have already established that throwing costs multiple L3 cache misses, if not actual page faults.


> Nothing benefits from being at the "end of the program".

It's not about the benefit, that's just the easiest way to implement it - put it in a different TEXT section and let the linker move it.

Although, there is a popular desktop ARM CPU with 16KB pages.

> Machines do still charge an extra cycle for branches taken vs. not, so it matters whether you expect to take it.

Current generation CPUs can issue one taken branch or 2 not-taken branches in ~1 cycle (although strangely Zen2 couldn't), but yes it is better to be not taken iff not mispredicted. (https://www.agner.org/optimize/instruction_tables.pdf)

> Negligibly few things never return; most of those abort. Performance of those absolutely does not matter.

Throwing an exception isn't a return, nor longjmp/green threads/whatever. Sometimes they're called abnormal or non-local returns, but according to your C++ compiler your throwing function can be `noreturn`.

Error path performance is important since there are situations like network I/O where errors aren't at all unexpected. If you're writing a program you can just special case your hotter error paths, but if you're designing the language/OS/CPU under it then you have to make harder decisions.

> Why should anyone care about predicting the catch block a throw will land in after all the right destructor calls have finished? We have already established that throwing costs multiple L3 cache misses, if not actual page faults.

More prediction is always better. The earlier you can issue a cache miss the earlier you get it back.

For instance, that popular desktop ARM CPU can issue 600+ instructions at once (according to Anandtech). That's a lot of unnecessary stalls if you mispredict.

And so its vendor has their own language, presumably compatible with it, which doesn't support exceptions.


More prediction is not, in fact, always better.

Specifically, a prediction is not better when it makes no difference. Then, it is worse, because it consumes a resource that would better be applied in a different place where it could make a difference.

Exceptions are the perfect example of a case where any prediction expenditure would be wasted; except predicting that no exception will be thrown. It is always better to predict no exception is thrown, because only the non-throwing case can benefit.

Running ahead and pre-computing results that would be thrown away if an exception is thrown is a pure win: With no exception, you are far ahead; with an exception, there is no useful work to be sped up, it is all just overhead anyway.

This is similar to a busy-wait: you are better off having the exit from the wait predicted, because that reduces your response latency, even though history predicts that you will continue waiting. This is why there is now a special busy-wait instruction that does not consume a branch-prediction slot. (It also consumes no power, because it just sleeps until the cache line being watched shows an update.)


You don't have to start over with the whole language. Just use -fno-exceptions in your project and dictate the use of std::optional, or absl::StatusOr, or whatever your favorite variant return type may be. For the examples in the article, it may be perfectly fine to not support failure, to simply std::abort whenever the sqrt of a negative number is requested and rename the function sqrt_or_die.
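
A minimal sketch of that last suggestion:

    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    // With -fno-exceptions, the failure path aborts instead of throwing.
    double sqrt_or_die(double x) {
      if (x < 0.0) {
        std::fprintf(stderr, "sqrt_or_die: negative input %f\n", x);
        std::abort();
      }
      return std::sqrt(x);
    }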


Except if that's incompatible with libraries you're using.

For the record, I like absl::StatusOr<>.


You also have to abandon operator new and STL unless you want to pretend they never fail.


Right, I use an allocator that just aborts. Imagining your program can recover from alloc failures has always struck me as fanciful, or at least out of the realm of my experience.


It's a legitimate thing in embedded and other memory-constrained circumstances, when you have something like a large cache and an allocation failure can trigger manual pruning or GC.


Just use the nothrow variant, no need to abandon new.
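
That is, something like:

    #include <new>

    void demo() {
      // Returns nullptr on failure instead of throwing std::bad_alloc.
      int* p = new (std::nothrow) int[1'000'000];
      if (p == nullptr) {
        return;  // prune a cache, GC, or bail out, as the parent describes
      }
      delete[] p;
    }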


Regarding maintainability ("hard-to-follow"), I’ve become a big fan of Java’s checked exceptions (and also their causal chaining and adding of "suppressed" exceptions, which would be nice to have for C++ destructors). I effectively see them as a sum type together with the return type, just using different syntax. It’s an important reason why I stick to the language, because no other language has that kind of statically-typed exceptions.

As the article explains, the problems in C++ are more an ABI issue than a programming language issue (except for the by-reference vs. by-value semantics). You could implement exceptions internally by variant-like return values, for example, similar to how error passing is done in Rust, while still having it look like exceptions on the language level. It would be fun for future languages and runtimes to more easily be able to switch the underlying mechanism, or possibly to be able to use different implementation mechanisms for different parts of a program as needed.


Java's checked exceptions are generally regarded as a mistake. There's a reason no other language has them, and newer JVM languages (Groovy, Clojure, Scala, Kotlin) treat all exceptions as runtime. Anders Hejlsberg (creator of Delphi, C# and TypeScript) also has an excellent article on their problems [1]. In modern Java I see nearly only runtime exceptions used, especially because that's necessary for most Java 8+ lambda APIs.

Especially when used as an error ADT they're awful, because they mix everything from "you have to handle this 90% of the time" to "the universe's physics just changed, sorry" into one construct. Much better to use something like Vavr's Try and explicitly propagate the error as a value.

[1] https://www.artima.com/articles/the-trouble-with-checked-exc...


Most of the software I write is designed to be fault-tolerant, and checked exceptions are a fantastic way of detecting potential faults. The problem is that checking is baked into the exception definition instead of its usage.

If I declare "throws NullPointerException", then this should mean I want it to be a checked exception. This should force the caller to catch the exception, or declare throwing it, or to simply throw it out again without declaring it. This would effectively convert the checked exception into an unchecked exception.

Converting a checked exception to an unchecked exception is possible in Java, and I do it whenever it makes the most sense instead of wrapping the exception, which is just a mess. Unfortunately, there's no way to make an unchecked exception behave as if it were a checked exception. It's easier to reduce fault-tolerance checks than to add more in.

Some might argue that converting an exception to be unchecked is a bad idea, but this sort of thing is done all the time in JVM languages that are designed to interoperate with Java. If I call a Scala library from Java, and that library is doing I/O, then IOException has now effectively become unchecked.


The distinction some Java programmers make (including myself) is to treat RuntimeExceptions as indicating interface contract violations (usually preconditions), or more generally, bugs. That is, whenever a RuntimeException occurs, it means that either the caller or the callee has a bug. When existing APIs use RuntimeExceptions to indicate any other error condition, they are wrapped/converted into a checked exception ASAP.

I understand your point about usage-dependent checking. However, I believe it is mistaken. Consider a call chain A -> B -> C -> D, where D throws a checked exception of type X, which C converts into an unchecked exception (still type X). At the same time, B also calls other methods that happen to throw a checked X, and thus B throws a checked X itself, which now, unbeknownst to B, also includes the X from D. B documents the semantic of its X exception, which may not fit the ones thrown by D. Now A catches X, believing the semantics as documented by B, but actually also catches X from D, which was concealed (made unchecked) by C. This breaks checked exceptions in their role as part of an interface contract.

The fact that unchecked exceptions may originate from arbitrarily deep in the call chain is also the reason why they are unsuitable for defining interface contracts. A function declaring certain semantics for a particular unchecked exception can't realistically ensure those semantics if any function it calls itself must be assumed to also throw exceptions of that type (because it is unchecked). Effectively, you can't safely document "exception X means condition Y" for your function if any nested calls may also throw X for unknown reasons.


Any exception thrown in response to a bug is a fundamental design failure.

At the point where it is evident the program has a bug, there is nothing known, anymore, about the state of the program. It means that data structures defining the state are inconsistent to a wholly unknown degree, and any further computation depending on, or worse, altering that state can do no better than compound the failure.

I always delete code I find that tries to recover from a bug. It is impossible to recover from a bug; the best that can be done is to bail out as fast as possible.


The problem that you outlined is a different one than whether or not exceptions are checked. The further an exception is thrown (call chain depth), the more context that gets lost. This can be fixed with more specialized exception types.

My example of declaring an NPE as checked isn't something I would actually encourage. It conveys very little context, and exceptions which extend RE should signal a bug in the code. I should have selected a better example.


The problem is strongly tied to whether exceptions are checked if you want to have the type system support you in checking and validating your reasoning about the contracts between callers and callees. With unchecked exceptions, unless you have extreme discipline and document exception types for every function, and take care to use dedicated exception types for all non-bug exceptions, effectively emulating what the type-checker does for you with checked exceptions; unless you do all that, it's a fool's errand in terms of ensuring (being able to prove the correctness of) your interface contracts.


Side-stepping the fact that you're using a runtime NullPointerException as an example: Java's checked exceptions are just not particularly good at what you want to do.

If you have a function that has an abnormal result as its API, it should have that as part of its return value, because returning as values is what you do with results. Checked exceptions in contrast don't compose. See for example this code:

   X myFunction(Y param) throws MyFailureException;

   List<X> transformed = list.stream()
     .map(e -> myFunction(e))
     .collect(toList());
This is not allowed, because the lambda in map can't throw. Even if that were allowed, map would need a generic "throws Exception" as its API, which would be awful. With checked exceptions, the only possibilities you have are to catch the exception locally at the point of calling the function, or to stop as soon as you encounter the failure and bubble it up.

Instead, if you make the error part of the return value, you can do whatever. I'll use Vavr's Try as an example, but you can also do this in Java 17 with sealed interfaces or in a myriad of other ways.

   Try<X> myFunction(Y param);

   List<Try<X>> transformed = list.stream()
     .map(e -> myFunction(e))
     .collect(toList());
Now you can still handle failure locally, but you can also check the whole list and make a decision based on that. Or you can propagate the results including failures because this is not the best place to handle them. That's what I mean with composes vs not composes.


I think the question is more about whether the JVM's implementation of exceptions is somehow less affected by core count?

That is, the checked part is just a language-level feature, right? The JVM doesn't really make much of a distinction. (This is meant as a check on my assumption.)


Yes, that's right, as static type checks generally are. However, the C++ performance issues are unrelated to whether exceptions are statically and/or runtime checked. Furthermore, Java exceptions are not particularly efficient, in particular because they collect the current stack trace on creation by default, which is a relatively expensive operation.


I thought you could tune away the trace on creation behavior.

Regardless, I'd be interested in seeing if this is a performance bottleneck. I'd guess it is only relevant in dataset processing. The closer you are to a place that can legitimately toss the exception to a user, the more likely you are not to care?

That is, if the common case of an exception is to stop and ask for intervention, is this a concern at all?


> I thought you could tune away the trace on creation behavior.

You can when you implement your own exception type, but not in general (and doing so would break too many things).

Exceptions are thrown and caught quite frequently in Java for "expected" cases, for example when attempting to parse a number from a string which is not a valid number. It's generally not a performance problem, and stack trace collection is probably heavily optimized in the JVM. Nevertheless, it's certainly still a lot slower than C++ single-threaded exceptions. You have to realize that even a factor of 100 slower may be unnoticeable for many use cases, because so much else is going on in the application.


Quickly googling, I see there is an option for implicit exceptions to omit the stack trace for fast throws. Seems finicky, though.

But, yes, I realize most of these are probably not noticeable.


> There's a reason no other language has them,

As always someone is wrong on Internet.

Java adopted a feature that was initially introduced in CLU, adopted by Mesa/Cedar, Modula-2+ and Modula-3, was being considered for ongoing ISO C++ standardization at the time.


Correct, but also pedantic and doesn't change the point. There's an implied "mainstream" or "currently serious contenders to start new projects in" adjective in "no other language has them".


Pedantic is good, even C and C++ compilers support it.


"Java's checked exceptions are generally regarded as a mistake."

By people who do not want to believe that errors are part of a system's API and prefer to just write the happy path and let any exception kill the process. And who don't mind getting called at 2:00 am because a dependency buried deep in a subsystem threw an exception that they'd never heard of before.


> errors are part of a system's API

This is a really interesting and nuanced point here. The article linked above [1] talks a bit about it. The problem is, they both are and aren't part of the API in the strict sense.

In the sense that they specify a contractual behaviour they are part of the API of a function. But in the sense that they are something the caller should / needs to specifically care about, they sit in between. That is, in the vast majority of cases, the caller does not care specifically what exception occurred. Generally they want to clean up resources and pass the error up the chain. It is "exceptional" that a caller will react in a specific way to a specific type of exception. So this is where Java goes wrong, because it forces the fine-grained exception handling onto the client when the majority case (and preferred case generally) is the opposite. It makes you treat the minority case as the main case. There are ways to work around / deal with this but nearly all of them are bad. The article talks about some of the badness.

I do think it's interesting though that Rust has taken off and is generally admired for a very similar type of feature (compiler enforced memory safety). I am really curious how that will age, but so far it seems like it is holding up.

[1] https://www.artima.com/articles/the-trouble-with-checked-exc...


It is still far too early to say.


Errors are part of the system's API, but Java's checked exceptions aren't really that.

Case in point, the many checked exceptions in the Java standard library that are never, ever thrown. Ever. Such as ByteArrayOutputStream#close(). In fact the docs on that method even say it doesn't do anything, and won't throw an exception. But there's still a checked exception that you have to handle, because it came from the interface.

Which is part of why checked exceptions are a mistake. If Java wasn't so aggressively OOP, then maybe there's a good idea there. Like maybe checked exceptions in C++ would work, as you're not relying (as much) on inheritance to provide common functionality. But as soon as interfaces & anonymous types (eg, lambdas) enter the scene, it starts falling over.

Also in terms of API design, checked exceptions are quite limiting. Especially when working with asynchronous APIs, as checked exceptions are inherently coupled to synchronous calling conventions.

And there's also then the problem of not a whole lot of your code base is an "API surface" (hopefully anyway), and it's really vague how a middle layer should handle checked exceptions. Just propagate everything? But that's a maintenance disaster & leaks all sorts of implementation details. Just convert everything to a single type? Well now you can't catch specific errors as easily.


> Like maybe checked exceptions in C++ would work

https://en.cppreference.com/w/cpp/language/except_spec
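
For context: those were the dynamic exception specifications, which look like checked exceptions but were enforced at run time rather than compile time; they were deprecated in C++11 and removed in C++17. A sketch:

    struct Oops {};

    // Pre-C++17 dynamic exception specification: a violation calls
    // std::unexpected at run time instead of failing to compile.
    // Deprecated in C++11, removed in C++17, so this only compiles
    // under older standards.
    void f() throw(Oops);

    // The modern replacement is binary: a function may throw, or not.
    void g() noexcept;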


> By people who do not want to believe that errors are part of a system's API…

The point was that you should be returning errors, not throwing them. Runtime exceptions (null reference, division by zero, out of memory, etc.) ought to indicate a fatal error in the (sub)program or runtime environment. You can trap these, and report them, but it's usually a mistake to try to case-match on them. Unlike errors, which are predictable, enumerable elements of the design, runtime exceptions should be treated as an open set.


I disagree with this. But, I'm also a fan of the condition system in Common Lisp.

That is, if the problem is likely one that needs operator/user intervention, the non local semantics of exceptions makes a ton of sense. Indeed, it is useful to have a central handler of "things went wrong" in ways that is cumbersome if every place is responsible for that.


If you read the article by Anders Hejlsberg, he's not arguing against centralized handling of exceptions—the handling of runtime exceptions is expected to be centralized near the main program loop. That, however, is a general-purpose handler which won't have much logic related to any particular kind of exception; it just reports what happened and moves on. You don't need checked exceptions for that.

The condition system in Common Lisp (which I am also a fan of BTW) is designed around dealing with conditions when they occur, whereas most of the alternatives focus on the aftermath. In particular, conditions don't unwind the stack before running their handlers, which makes it possible to correct the issue and continue, though handlers can naturally choose to perform non-local returns instead. More to the point, there is no requirement to annotate Common Lisp functions with the conditions they may raise, which makes them more akin to unchecked exceptions.


Fair. Sounds like you are more claiming that most functions would be better returning a result type, but some will be better with more?

I view this as I want my engine to mostly just work. It may need to indicate "check engine" sometimes, though. And that, by necessity, has to be a side channel?

I think that is my ultimate dream. I want functions to have a side channel to the user/operator that is not necessarily in the main flow path. At large, I lean on metrics for this. But sometimes there are options. How do you put those options in, without being a burden for the main case where they are not relevant?


> I want functions to have a side channel to the user/operator that is not necessarily in the main flow path.

That is the essence of the Common Lisp condition system, and you can get there in most languages with closures, or at least function pointers, and exceptions or some other non-local return mechanism using a combination of callback functions for the conditions and unchecked exceptions for the default, unhandled case. The key is that you don't try to catch the exceptions, except at the top level where they are simply reported to the user. Instead you register your condition handler as a callback function so that it will be invoked to resolve the issue without unwinding the stack. It helps to have variables with dynamically-scoped values for this, though you can work around the absence of first-class support as long as you have thread-local storage.

C++ actually uses this model for its out-of-memory handling. You can register a callback with std::set_new_handler() to be invoked if memory allocation with `operator new` fails; if it returns then allocation is retried, and only if there is no handler is an exception thrown. Unfortunately this approach didn't really catch on in other areas.
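
A minimal sketch of that model, using the classic emergency-pool idiom:

    #include <cstdlib>
    #include <new>

    namespace {
    char* emergency_pool = new char[1 << 20];  // reserved up front

    void on_out_of_memory() {
      if (emergency_pool != nullptr) {
        delete[] emergency_pool;  // release the reserve; new then retries
        emergency_pool = nullptr;
      } else {
        std::abort();             // nothing left to free: give up
      }
    }
    }  // namespace

    int main() {
      std::set_new_handler(on_out_of_memory);
      int* p = new int[100];      // on failure, the handler runs first
      delete[] p;
    }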


I'm not sure callbacks alone can be equivalent to a resumable conditions system. You really need full coroutines in the general case. Anyway, what you are proposing is more of a partial alternative to exceptions, since the caller has to be aware of what's 'handled' in advance, whereas conditions may additionally unwind up to a predefined restart point or fail up to the caller similar to a non-handled exception.


> I'm not sure callbacks alone can be equivalent to a resumable conditions system.

I agree, but I was not relying solely on callbacks. You do need some form of non-local return (such as exceptions or continuations) to implement the "resumable" aspect with a choice of restart points, in addition to the callbacks.

> You really need full coroutines in the general case.

I'm having a hard time imagining an error-handling scenario that would require the full power of coroutines—in particular the ability to jump back into the condition handler after a restart. In any case, most languages (even C, if you work at it) can express coroutines in some form or another.

> Anyway, what you are proposing is more of a partial alternative to exceptions, since the caller has to be aware of what's 'handled' in advance, whereas conditions may additionally unwind up to a predefined restart point or fail up to the caller similar to a non-handled exception.

Clearly there has been a breakdown in communication, as I thought this was exactly what I described. The handler callbacks are "ambient environment" (i.e. per-thread variables with dynamically-scoped values) so there is no particular need for the caller to be aware of them unless it wishes to alter the handling of a particular condition. Restart points can be implemented by (ab)using unchecked exceptions for control flow to unwind the stack, or more cleanly via continuations if the language supports them.


The compiler makes them part of the API. And what a lot of people do is just throw in a bunch of blanket catches with empty code. Although some of this is server vs. desktop software - the article was about complex GUI apps, not long running servers. Tho I personally think long-running servers shouldn't use exceptions. Each call to something out of the running code stack frame should explicitly decide what to do on failure or "didn't hear back." That's how your server gets to be bullet proof (and by bullet proof, I don't mean "auto-restarts on unhandled exception.")



