The problem is that in many practical situations you don’t know which situation is “exceptional”.
If you write a jpeg-processing service, it’s intuitive to raise an exception on a malformed jpeg, but there’s no guarantee that only 1% of the jpegs users upload to the service are malformed.
In other words, we treat exceptions as deviations from what our code expects, not as what's statistically unlikely in the input space, which in many cases is impossible to predict accurately. (E.g., even if only 1% of your inputs are malformed over all time, on some Thursday you may be hit with 80% bad inputs, making the performance drop across the service unacceptable.)
And it's a case for which exceptions are a perfect fit. Even if at the entry point your "parseJpeg()" function instead returns a variant<result, error>, internally propagating that error via exceptions is the "correct" design. Otherwise you're littering the code with branches, which makes the expected common path slower.
As a bonus, not only is using exceptions for this faster, it's also less error-prone. There's much less risk of an intermediate function or helper abstraction failing to propagate an error code properly.
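Roughly what I mean, as a minimal sketch (Image, ParseError, and the helper names are made up for illustration): the public entry point returns a variant, while the internals just throw, so the happy path carries no error-checking branches:

    #include <cstdint>
    #include <stdexcept>
    #include <string>
    #include <variant>
    #include <vector>

    struct Image { int width = 0, height = 0; std::vector<std::uint8_t> pixels; };
    struct ParseError { std::string message; };

    namespace detail {
        // Internal helpers throw; no error codes threaded through every call.
        void checkMagic(const std::vector<std::uint8_t>& data) {
            if (data.size() < 2 || data[0] != 0xFF || data[1] != 0xD8)
                throw std::runtime_error("missing JPEG SOI marker");
        }

        Image decodeScans(const std::vector<std::uint8_t>& data) {
            checkMagic(data);
            // ... real decoding here; throw on any malformed segment ...
            return Image{1, 1, {0}};
        }
    }

    // Boundary: translate the exception into a value the caller branches on.
    std::variant<Image, ParseError> parseJpeg(const std::vector<std::uint8_t>& data) {
        try {
            return detail::decodeScans(data);
        } catch (const std::exception& e) {
            return ParseError{e.what()};
        }
    }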
Well, it would be, if exceptions weren't avoided like the plague in C++ because they have a, well, bad implementation, which can't be meaningfully fixed without ABI breakages (as the paper covers).
My experience in using exceptions (and restartable conditions) over the last 40 years is that exceptions are for things you don't have the ability or "knowledge" (i.e. state) to handle locally.
So a function that ingests a file and processes it may throw an exception if the file isn't found so that the UI can catch it and ask the user for an alternative filename (or to give up and not open a file at all).
If you're connecting to a remote machine and don't get a response, you might throw an exception because you don't know if the user typed the name wrong.
Whereas if you are already talking to a machine and it stops responding, it's reasonable to wait a moment and retry, as it could be a transient network brown-out, which is something you can deal with on your own.
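A rough sketch of that division of labor (processFile and the prompt loop are illustrative, not from any real codebase): the low-level function throws because it has no way to ask the user anything, while the UI layer, which does, catches the exception and asks for another filename:

    #include <fstream>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    void processFile(const std::string& path) {
        std::ifstream in(path);
        if (!in)
            throw std::runtime_error("cannot open " + path);
        // ... ingest and process the file ...
    }

    int main() {
        std::cout << "file to open (empty line to give up): ";
        std::string path;
        while (std::getline(std::cin, path) && !path.empty()) {
            try {
                processFile(path);
                break;                                    // success, stop asking
            } catch (const std::runtime_error& e) {
                std::cout << e.what() << "; try another filename: ";
            }
        }
    }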
When something happens that violates your assumptions about your own program's behavior, it throws the program into a state where it doesn't know what happens next. Kind of like a panic.
This would mean that attempting to open a file that doesn't exist shouldn't throw an exception. But that is exactly what it does in the standard libraries of many languages with exceptions.
It should be up to the application to throw or not, not a library. I write a system service. If it can't find the configuration file, it can't continue, so it throws an exception. If it can't open a file that contains state from a previous run (maybe because it's the first time it's running) that's fine, the program can run without it and thus, no exception.
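For example (the file names and loadFile are hypothetical, just to show the policy living in the application): the low-level reader reports a missing file as an empty optional, and the service decides which files are fatal:

    #include <fstream>
    #include <optional>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    std::optional<std::string> loadFile(const std::string& path) {
        std::ifstream in(path);
        if (!in) return std::nullopt;      // a missing file is not an error here
        std::ostringstream buf;
        buf << in.rdbuf();
        return buf.str();
    }

    void startService() {
        // Configuration is required: without it the service cannot continue.
        auto config = loadFile("service.conf");
        if (!config)
            throw std::runtime_error("configuration file missing");

        // State from a previous run is optional: absent on first run, and that's fine.
        auto state = loadFile("service.state").value_or("");

        // ... run with *config and state ...
    }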
The C++ standard IO library doesn't enable exceptions by default, but IIRC that's just a relic of the fact that it dates back to when C++ didn't have exceptions.
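For reference, you can still opt in per stream with std::basic_ios::exceptions(); by default a failed open just sets failbit:

    #include <fstream>
    #include <iostream>

    int main() {
        std::ifstream quiet("does-not-exist.txt");           // no throw, just failbit
        std::cout << std::boolalpha << quiet.fail() << '\n'; // true

        std::ifstream loud;
        loud.exceptions(std::ifstream::failbit | std::ifstream::badbit);
        try {
            loud.open("does-not-exist.txt");                  // now this throws
        } catch (const std::ios_base::failure& e) {
            std::cout << "open threw: " << e.what() << '\n';
        }
    }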
This is a bit of a stretch. The purpose of the service would have to be processing jpegs that are expected to be bad. Moreover, they would have to be mostly bad; i.e. the service would have to be specifically intended for finding rare good jpegs in a deluge of bad ones, as quickly as possible.
Even in a situation where there are more bad jpegs than good ones, we still don't necessarily care that they take the slow path, if the purpose of the service is doing meaningful processing on the good jpegs.