
Real cost of C++ exception vs error-code checking - zeugma
http://lazarenko.me/tips-and-tricks/c-exception-handling-and-performance
======
pilif
I assume he was proving a point where main() of his full program using error
code checking was not in fact checking the return value of foo()

Even in simple example code like this you can forget a check. In this case
that result would be undefined if any call to divide failed.

I'd much rather have my program blow up with a readable stack trace pointing
to where it happened than it working with a basically random value and then
maybe blowing up somewhere totally unrelated or worse, destroying user data.
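The contrast might be sketched like this (a minimal sketch, not the article's code; `divide_ec` and `divide_ex` are hypothetical names):

```cpp
#include <stdexcept>

// Error-code style: the failure flag is easy for a caller to silently drop,
// leaving *out untouched and the caller working with a garbage value.
bool divide_ec(double a, double b, double* out) {
    if (b == 0.0) return false;   // forgetting to check this is a silent bug
    *out = a / b;
    return true;
}

// Exception style: a forgotten check becomes an unmissable throw with a
// stack trace pointing at the actual failure site.
double divide_ex(double a, double b) {
    if (b == 0.0) throw std::invalid_argument("divide by zero");
    return a / b;
}
```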

~~~
davidsiems
You can accomplish this by using asserts in your code.

You don't need exceptions to get a callstack and you can assert that values
are valid and force a crash / callstack dump when they're not.

On top of that, you can compile the asserts out for release builds if you're
confident they won't be hit.
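A minimal sketch of that approach, assuming a hypothetical `divide_checked` helper: the `assert` fires (with a file/line report and a core dump for the callstack) in debug builds, and compiles to nothing under `-DNDEBUG`.

```cpp
#include <cassert>

// Validate the precondition with assert instead of throwing; compiling
// with -DNDEBUG removes the check entirely for release builds.
double divide_checked(double a, double b) {
    assert(b != 0.0 && "divide_checked: zero denominator");
    return a / b;
}
```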

~~~
dexen
Normally you turn assert() off for production builds, so it's not a robust
solution, as some error conditions may never get triggered in testing.

Memory protection catches a lot of out-of-bounds memory references pretty
well, and if you enable core dumps, you can extract a neat backtrace from the
core file (provided your routines fail-forward in case of invalid arguments).
Moreover, some compilers can be instructed to instrument your code & data,
including the stack, with guard data, meant to trip the process if it accesses
the wrong memory region. GNU malloc adds some guards if you set $MALLOC_CHECK_.

If you aren't worried about vendor lock-in, you can use GCC's
__attribute__((warn_unused_result)) [1]

\--

[1] [http://sourcefrog.net/weblog/software/languages/C/warn-unuse...](http://sourcefrog.net/weblog/software/languages/C/warn-unused.html)

~~~
ryanpetrich
And if you are worried about vendor lock-in, you can hide it behind a #define
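The usual shim looks something like this (`WARN_UNUSED` and `do_thing` are illustrative names): the attribute applies under GCC-compatible compilers and expands to nothing elsewhere.

```cpp
// Portability shim: the attribute vanishes on non-GCC-compatible compilers.
#if defined(__GNUC__)
#  define WARN_UNUSED __attribute__((warn_unused_result))
#else
#  define WARN_UNUSED
#endif

WARN_UNUSED int do_thing(void);

int do_thing(void) { return 0; }   // 0 = success, must be checked by callers
```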

------
McP
I prefer Raymond Chen's take on exceptions:
[http://blogs.msdn.com/b/oldnewthing/archive/2005/01/14/35294...](http://blogs.msdn.com/b/oldnewthing/archive/2005/01/14/352949.aspx)

~~~
AshleysBrain
This reads to me like 'writing good code is hard, and writing even better code
is even harder'. With modern C++0x smart pointers (like unique_ptr) and RAII,
you can get very high performance and (more) easily exception-safe code. Maybe
a lot of C++ exception criticism comes from people still trying to code like C
in C++?

~~~
svlla
"modern C++" has changed meanings so many times in the past 10 years that it's
not funny anymore. there's always some "modern" solution in C++ that ends up
causing more problems, leading to further "modern" solutions that end up...
you get my point.

some folks have decided to get off the C++ feature treadmill and go back to,
well... getting things done with solid languages (e.g. C) instead of learning
about the latest C++ non-solutions to non-problems.

~~~
AshleysBrain
Hmm, well, I'm relatively young so I guess I missed all those broken promises
:P Still, I think the "trying to code like C in C++" point still stands.

~~~
jerf
A compressed version can be obtained simply by comparing the initial release
of C++ to modern C++, and recalling that the initial version of C++ itself
shipped with, well, pretty much the same set of promises that modern C++ ships
with.

~~~
dkarl
Do you mean when it was called C with Classes in 1979, or when they changed
the name to C++ in 1983? That's about three decades of change either way. A
book about the history and evolution of C++ came out in 1994, a year before
the first public release of Java. More time has passed since that book was
released than between the invention of C with Classes and the publication of
that book. When a language has been around for thirty years, it's hard to
perceive its rate of change correctly relative to other languages. It would be
best compared to Perl, which is only eight years younger, and which is also
still thriving. C++ is actually a pretty slow-moving language.

------
Quarrelsome
Nice article but as far as I'm concerned you don't need to _prove_ this as it
is a logical fallacy to start with.

If an exception is being thrown then something is wrong, if something isn't
wrong then you implemented your exceptions incorrectly as exceptions shouldn't
exist in normal program flow.

So to recap, you're writing a crap ton more code just so you can return
your error code _slightly_ faster than it would take an exception. You're
optimising your failure cases, which (in the _vast_ majority of cases) is
UTTERLY ABSURD.

~~~
shin_lao
It's not slightly faster, it can be an order of magnitude faster.

Example: a listen loop which handles disconnections through exceptions. This
isn't stupid but it's not very efficient.
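A sketch of the two styles for that scenario (hypothetical `recv_*` helpers, with the connection state reduced to a bool): the exception version pays the unwind machinery on every routine disconnect, while the return-value version keeps it on the fast path.

```cpp
#include <stdexcept>
#include <string>

struct disconnected : std::runtime_error {
    disconnected() : std::runtime_error("peer disconnected") {}
};

// Exception style: a routine event takes the slow unwind path each time.
std::string recv_or_throw(bool connected) {
    if (!connected) throw disconnected();
    return "data";
}

// Return-value style: the common disconnect case is an ordinary branch.
bool recv_or_false(bool connected, std::string* out) {
    if (!connected) return false;
    *out = "data";
    return true;
}
```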

~~~
Quarrelsome
Why is it not stupid? If disconnections are part of normal application flow
then why would you use an exception?

You are correct, I was somewhat disingenuous with _slightly_ faster. It is
lots faster but lots faster in error cases, which from a philosophical angle
is still absurd.

As long as you use your exceptions for "bad shit" (uncommon error conditions
or completely unexpected failures or returns) then I still strongly believe
that the performance comparison is silly.

------
JoeAltmaier
Perhaps the worst problem with checking-vs-exceptions is, either solution
dominates your code structure, obscuring the algorithm logic.

The holy grail would be some method of ensuring the code cannot fail e.g.
weirdly constrained argument semantics. Thus separating algorithm from
constraints instead of shuffling them together on the page like a deck of
cards.

~~~
johnny531
If only c++ had some sort of static type system which could be leveraged to
provide compile-time checks...

But seriously, this is a large part of the power of c++'s type system. Taking
the article's example, if the argument types were of (user class)
'non_zero_float', there's no possibility for error.

You still have to check that your input is non-zero at some point, but you've
now focused it into one place (the 'non_zero_float' class ctor), and other
chunks of your program depending on those type semantics no longer need to
worry about it.
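A minimal sketch of that idea (class and function names are illustrative, not from the article): validation happens exactly once, in the constructor, and every function taking a `non_zero_float` can skip the check.

```cpp
#include <stdexcept>

// Validate once at construction; thereafter the type itself is the proof
// that the value is non-zero.
class non_zero_float {
    double v_;
public:
    explicit non_zero_float(double v) : v_(v) {
        if (v == 0.0) throw std::invalid_argument("non_zero_float: zero");
    }
    double value() const { return v_; }
};

double divide(double num, non_zero_float den) {
    return num / den.value();   // no zero check needed: the type guarantees it
}
```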

~~~
JoeAltmaier
You can really make that type do a compile-time check on runtime values?

It would be better to have some way of getting the compiler to optimize
constraints, perhaps by proving at compile time that the error is impossible.

------
AshleysBrain
Today with terabyte hard drives, gigabytes of RAM and broadband connections,
when is the binary size a more important factor than both execution speed and
ease of development? Especially when the binary size difference is probably
not huge?

Shouldn't the advice of this article just be "use exceptions"?

~~~
pmjordan
_when is the binary size a more important factor than both execution speed and
ease of development?_

Binary size (or maybe more accurately in this case, binary code layout) can be
highly relevant for speed due to the instruction cache.

As for ease of development, there are issues with C++ exceptions regarding
this as well: some C++ libraries aren't exception safe, and neither are
practically all C libraries. This is something you need to worry about
whenever you pass a function pointer into a library, as there might be an
exception-unsafe function higher up in the stack. Propagating an exception up
through it is potentially extremely dangerous.

That said, using exceptions can still be a good idea, especially if your code
doesn't need to be portable or if you know the platforms in advance, and you
are careful about passing around function pointers. All you need to do is
ensure that any of your code that might be called from third-party code with
questionable exception semantics won't throw or propagate any exceptions, e.g.
by installing a catch-all exception handler in it.
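At a C callback boundary that discipline looks something like this (a sketch with a hypothetical qsort-style `compare_cb`; the catch-all keeps any exception from unwinding through the C caller's frames):

```cpp
// Never let a C++ exception propagate through exception-unsafe C code:
// catch everything at the boundary and map it to an in-band result.
extern "C" int compare_cb(const void* a, const void* b) noexcept {
    try {
        int lhs = *static_cast<const int*>(a);
        int rhs = *static_cast<const int*>(b);
        return (lhs > rhs) - (lhs < rhs);
    } catch (...) {
        return 0;   // swallow: the C caller cannot unwind safely
    }
}
```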

~~~
barrkel
Binary working set sizes are often lower with exceptions than without, because
exception handling code can be moved elsewhere by the compiler. Error-checking
code, on the other hand, cannot be so easily detected, and hence moved.

I think the C++ implementation of exceptions has a lot to answer for though,
in poisoning too many developers on the concept. It really is an awful
implementation.

~~~
pmjordan
C++ seems full of missed opportunities. I fear a lot of them are due to the
slavish backward compatibility to C.

------
gersh
A few issues. Does he actually benchmark? Things don't always work in practice
the way you would think.

If you are going for ultra-high performance, do you even have error-checking?
Do you write it in assembler?

