Hacker News
On Rigorous Error Handling (250bpm.com)
67 points by rumcajz on Nov 17, 2018 | 56 comments



> When possible errors are part of the function specification, on the other hand, we are almost OK.

This is the single best piece of advice in this article. The second thing you have to document is the postcondition in case of an error - what state is the program left in?

With both a normal and an error postcondition you can fully specify your program. Like the author, I'm convinced that most of the pain with error handling stems from programmers ignoring the consequences of an error. That's the reason why approaches that force you to deal with errors explicitly (Maybe a, Result e a, etc.) end up being more robust. Otherwise, a part of the program just ends up missing.

However, from a theoretical perspective, exceptions are superior. The reason is that the error postcondition really does represent a non-local exit. Just like an ordinary return statement, it should be implemented as one instead of forcing programmers to walk the stack by hand. The latter is both less efficient and more error prone. Additionally, resource management must be integrated with error handling anyway and exceptions provide a clean opportunity to connect the two. This is one of the things that C++ gets right.


> I'm convinced that most of the pain with error handling stems from programmers ignoring the consequences of an error.

This is one of my favorite empirical studies of software: "Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems" (2014) [1]. It found that, indeed, many catastrophic failures happen because the consequences of errors are ignored: the handling code was either empty (an explicit ignore) or just logged the condition, even when the language enforced error handling. A simple tool they wrote to recognize this would have prevented 33% of the catastrophic failures they'd studied in Cassandra, HBase, HDFS, and MapReduce. So even when programmers are forced to explicitly respond to an error, they handle it with what amounts to a ¯\_(ツ)_/¯. I speculate that it's because, psychologically, we don't want to think hard enough about what to do when things that seem exceptional happen.

[1] https://www.usenix.org/system/files/conference/osdi14/osdi14...


A second example is Java's checked exceptions. They make errors part of the method's interface, but are almost universally hated because doing anything useful with errors is just too damn hard.


Well, it's the same example :) The point of the paper is that even when people are forced to handle errors, they punt.

BTW, that it's "almost universally hated" is more myth than reality. When there are polls at conferences, most developers actually say they like it a lot. The complaints are mostly not about the feature, but the choice of which exceptions thrown by methods in the standard library are marked as checked and which are not.


The trouble with checked exceptions is that most of the time you want the exception to continue up the stack, but you are forced to write unnecessary code.


If you want to propagate the exception, you don't need to write code, just declare the method as throwing. Sometimes people don't want to do that because they don't want to declare the exception, but that means you've decided that this should be the point where the exception is handled, because the caller is not written to expect exceptions; therefore the code is not unnecessary. It's true that in some cases this is forced on you by how some of the standard library's higher-order functions work.


The problem in Java is lack of standardization. For example, there's no standard way of handling errors in callbacks. What exception should be thrown? How does the caller handle it?

Go's error handling isn't perfect, but at least they defined a single type (error) and made it idiomatic to use it everywhere. The same approach could have worked with checked exceptions, resulting in a language that has two kinds of methods: those that always succeed and those that can fail. This would result in a "what color is your function" problem [1], but with only two, obvious choices, it's liveable.

But there is no consensus in Java for how to say "this method can fail for a variety of reasons". (Many Java programmers believe that declaring a method to throw Exception is bad.) So you have a tower of Babel situation where methods can throw dozens or hundreds of different checked exceptions, many of which are incompatible, and lots of exception adapting at the boundaries, and long chains of wrapped exceptions.

[1] http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...


The reason you have many exception types is that the handling of different kinds of errors is done in different places. I agree that in practice this is not often done, and maybe it is overdesign.

BTW, exceptions don't exactly introduce the colored-function problem, because it's easy to catch an exception, handle the error, and stop the "color chain." In fact, that's the whole idea. With async/sync this either cannot be done, or, if it can, it comes at a significant cost.


When a database transaction fails, it rolls back to the state before the transaction. Exceptions ought to work like this too. Then you wouldn’t have to think about all the places an exception could be thrown. A try-catch block would either completely succeed, following the well-tested success path, or completely fail, leaving the program in its original state.


In C++ there is a thing called exception safety, and there are three levels:

* No throw: the function will not fail. Full stop.

* Strong exception safety: if the function fails, the state of the object(s) it is acting on is unchanged. This is similar to a transactional atomicity guarantee.

* Basic guarantee: if the function fails, the state of the objects is unspecified but valid (i.e. no invariant is violated), though data might be lost.

From the point of view of the caller, no-throw is of course the most desirable property, then strong, and finally basic. Anything less than that (i.e. corruption, leaks, dangling pointers) is considered unacceptable.

Another important insight is realising that exception guarantees have little to do with exceptions and everything to do with postconditions on the return path: for example, the same techniques used to guarantee strong safety in the face of exceptions also work to guarantee postconditions in the face of multiple explicit return paths.
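
The usual way to get the strong guarantee (in C++, typically copy-and-swap) can be sketched in Python too — a toy `strong_update` with a made-up validation rule: do all fallible work on a private copy, and make the commit the only mutation the caller can observe.

```python
import copy

def strong_update(config: dict, updates: dict) -> None:
    """Apply every update or none of them (strong exception safety).

    All fallible work happens on a private copy; the commit at the end
    is the only mutation of the caller-visible dict, so a failure
    part-way through validation leaves `config` exactly as it was.
    """
    candidate = copy.deepcopy(config)
    for key, value in updates.items():
        if value is None:                  # stand-in validation rule
            raise ValueError(f"invalid value for {key!r}")
        candidate[key] = value
    # Commit step: these dict operations cannot fail half-way.
    config.clear()
    config.update(candidate)
```

The guarantee only holds because the commit itself is effectively no-throw — the same division of labour the C++ idiom relies on.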


Transactions and strong exception safety have two things in common: they're easy to use and hard to implement.


You are correct, but fully transactional semantics by default would be extremely hard to do in a non-GC'd systems language like C++. I could definitely see a language with such a feature, though (transactional memory would be a good place to start, I guess).


You can't un-fire the missiles.

This might be a great approach for some (plausibly very large) subset of cases, but it can't handle everything.


You can wait to fire them until commit time though.

Also abort sequences are a thing so you can kinda-sorta unfire them (talk about compensating sequence!).


> You can wait to fire them until commit time though.

Not in a way that truly solves the problem. Any time you are coordinating multiple actions that are irreversible and may fail, you'll need some contract other than "either your transaction succeeds or everything is rolled back."


> You can't un-fire the missiles.

True, but you can always offer at least the basic guarantee, and you can always document what you are guaranteeing to the caller.


Actually, C++ doesn’t really get this “right”, they mostly just get to say “we have exceptions in the language”. Exceptions are one of the most frustrating behaviors in C++ (e.g. lots of ways to outright crash your program, no way to really understand the full code path that an exception came from, easy to make serious mistakes like having code paths that throw exceptions in destructors).


This is one place where syntax that allows easy monadic composition really shines, because it becomes really simple for programmers to "walk the stack". Of course this still requires the "in case of error" state to be defined, but purely functional expressions can allow that to be trivial.

A good example is the Monad instance of Either in Haskell.
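
For illustration, here is a toy Python encoding of the same idea — a `bind` over `('right', value)` / `('left', error)` tuples (hypothetical names, not real Haskell): errors short-circuit the chain while the happy path stays linear.

```python
def bind(result, fn):
    """The Either monad's >>= for a toy tuple encoding: ('right', v)
    carries a success into fn; ('left', e) passes through unchanged."""
    tag, payload = result
    return fn(payload) if tag == "right" else result

def parse_int(s):
    return ("right", int(s)) if s.lstrip("-").isdigit() else ("left", f"not an int: {s}")

def reciprocal(n):
    return ("right", 1 / n) if n != 0 else ("left", "division by zero")

def pipeline(s):
    # Each step only ever sees a success; any Left short-circuits.
    return bind(parse_int(s), reciprocal)
```

The "walking the stack" happens inside `bind`, so the calling code never mentions the error case until it actually wants to inspect it.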


I hear what you're saying, but I confess I don't 'get' exceptions. On the spectrum of ways to report an error (C return values, Swift errors, C++ exceptions, Lisp conditions), they seem like an arbitrary spot in the middle that doesn't really give me the best of anything. They're neither as efficient as C/Swift, nor as flexible as Lisp.

Once you're going to go to the effort to walk the stack, why not give the caller the opportunity to continue? It's a huge increment in power for what seems like a minimal addition.


In fact, the addition is almost a subtraction. To make restartable exceptions work, all you need is a way to search for handling points without unwinding the stack, and invoke closures there.

In TXR Lisp, I unified conditions and restarts into a single mechanism, which is called exceptions.

There are two kinds of handling frames: ones for which an unwinding takes place first, and ones which just intercept the search. Both are identified by an exception symbol which exists in an inheritance hierarchy.

It's all documented in detail here: http://nongnu.org/txr/txr-manpage.html#N-0146B946

There are dialect notes comparing with ANSI CL, and an example program shown in both TXR Lisp and a CL translation for comparison.


> They're neither as efficient as C/Swift, nor as flexible as Lisp.

My understanding is that, at the cost of a significant penalty for the (hopefully rare) exceptional case, error handling with exceptions can be faster than returning values like in C in the (hopefully common) successful case.


Is there a way to reconcile the two?

If the stack is walked up automatically programmers aren't going to deal with errors explicitly. No?


Error-handling code is definitely poorly-tested in my experience across many code bases. Although printf()-style logging has advantages, a huge disadvantage is that it tends to be the reason error-handling code fails: something meant to write a simple log message gets the format/type wrong and an error condition turns into a crash or obscure corruption. In fact, logging code that was once correct can become wrong if the target variable type changes. This is why I love Python format-strings with “{}”, a.k.a. the “just do the right thing here” syntax.

Generally the advice of “pick a few failure types and stick to them” is exactly right. You not only encourage error handling to take place but that code is likely to remain correct/complete over time.


An issue that can still happen with Python's string formatting is that you can get the number of arguments/`{}` wrong. D (other languages too, probably) has compile-time format strings, as well as automatic string conversion: `format!"%s"(2)`. Giving the wrong format string/argument types fails at compile time. Some C/C++ compilers also automatically check this for printf. Though any of these can still fail if the actual output write fails, i.e. stdout is closed or doesn't exist.


> An issue that can still happen with python's string formatting is that you can get the number of arguments/`{}` wrong.

Easily avoided by using the latest Python feature, added in 3.6: Formatted string literals, a.k.a. f-strings¹. Instead of using

  "foo {} baz".format(bar)
you use

  f"foo {bar} baz"
1. https://www.python.org/dev/peps/pep-0498


This is one of the reasons I love the newer C# string interpolation feature. No more frigging string.Format() strings with positional arguments.


Provided that you have a way of simulating an error, you can test the corresponding error-handling path. Then these paths will get executed whenever you run your test suite.

A coverage tool can be used to find any error handling code that wasn't tested. (But it won't help you find error handling that's missing altogether.)
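
A minimal sketch of that technique in Python, using a mock to inject the fault into a hypothetical `fetch_and_store` and then asserting that the error path actually ran:

```python
from unittest import mock

def fetch_and_store(client, db):
    """Hypothetical code under test: store fetched data, or log the failure."""
    try:
        data = client.get("/users")
    except ConnectionError:
        db.log_failure("fetch failed")
        return None
    return db.store(data)

def test_fetch_failure_is_logged():
    client = mock.Mock()
    client.get.side_effect = ConnectionError("network down")  # simulate the error
    db = mock.Mock()
    assert fetch_and_store(client, db) is None
    db.log_failure.assert_called_once_with("fetch failed")
    db.store.assert_not_called()
```

With a test like this in the suite, the `except` branch shows up as covered, and regressions in it fail loudly instead of only surfacing in production.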


> Programmers want to implement new features. Writing error handling is just an annoyance that slows them down.

For me, personally, this is backwards. As a programmer, I want to write error handling (especially for infrastructure), because it means I'll be able to work more quickly later. I won't have to debug through all these abstraction layers. It's the manager who always says "It (the demo = happy path) looks good, so it's time to move on to the next feature".


And because our error-handling code rarely, if ever, gets executed (I often see trivial mistakes that lead to crashes in the error-handling case), I think the Erlang philosophy of "let it crash" is the best approach.


I like to pepper my code with asserts for that reason. But you can also get the case where the process crashes and get restarted in an infinite loop, so it's not a panacea either.


That’s where Erlang’s supervision model comes to the rescue. Restart until it becomes obvious something more serious is wrong and then raise a flag.

The nice thing about Erlang is that practically every line of code is an assertion and they’re all live in production. Such a huge advantage over development assertions that get thrown away for prod.
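
A toy single-worker version of that restart policy can be sketched in Python (hypothetical `supervise`, loosely modelled on Erlang's restart-intensity idea, not a real OTP supervisor): restart on crash, but escalate once crashes come too fast.

```python
import time

def supervise(worker, max_restarts=3, window=60.0):
    """Rerun worker when it crashes; give up (re-raise) if it crashes
    more than max_restarts times within `window` seconds."""
    crashes = []
    while True:
        try:
            return worker()
        except Exception:
            now = time.monotonic()
            crashes = [t for t in crashes if now - t < window]
            crashes.append(now)
            if len(crashes) > max_restarts:
                raise  # something more serious is wrong: raise the flag
```

This also illustrates the sibling comment's infinite-restart-loop worry: the window bound is exactly what turns "restart forever" into "restart, then escalate."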


I think Rust got this right, too. No exceptions, but a Result<R,E> enum return type, and the compiler forces you to handle the error case as well by doing exhaustiveness checks.


Doesn't that lead to a lot of error handling code?

Another nice feature of the Erlang model is that, often, you can code the happy path and forget the error checking. Makes for much tighter/cleaner/easier-to-read code.


In many cases the question mark operator can propagate errors for you. It is both explicit and non-verbose.


I too like to do this, yet I feel hobbled by the infinite-loop problem on bare iron (the watchdog fires, provided it's not preprocessed out in the release build). A sibling comment brings up the Erlang supervisor, and I like failing fast and often when someone's around to see that the log is littered with restarts.

Is that the best one can hope for - to leave traces for my successor to pick up the pieces?


This is how you should use exceptions also. Let it throw. The top level exception handler (aka the supervisor) decides how deep the application is allowed to crash.


This might be a shot in the dark but does anyone know how to test error handling in Rails? Say for example you have a method that has a begin block with 10 lines of code in it and a rescue that does some logging or does a puts statement. How do you write a spec that tests the code within the rescue automatically without modifying the 10 lines of code? My google-fu on this one has failed.


I'm probably misunderstanding your situation, but why don't you (from the test suite) mock some function used in the begin block to throw an error and then `expect(the_logging_function).to have_received` the logging output?


That's not actually a bad idea at all. What if, however, you're just doing simple variable manipulations in those 10 lines and not calling methods (I realize that you can override the math methods as well)? Is there a way to handle that easily, or to inject an exception into the begin block?


Reading the article made me think that this is how medical doctors work. All diseases are codified and symptoms are errors. Doctors try to match the error to the error code and take action by prescribing codified medications.


This is how diagnosticians work, which is an important part, but not the entirety, of what medical doctors do. Research medicine is an extant thing.


With exceptions being classes, a library designer can vastly simplify things for users by creating useful class hierarchies for the library's exceptions, allowing the user to be as specific or generic as they wish or need.
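
A minimal Python sketch of such a hierarchy (hypothetical `StorageError` library exceptions and a stand-in `save` call): the caller picks the granularity by choosing which class to catch.

```python
class StorageError(Exception):
    """Root of a hypothetical library's exception hierarchy."""

class ConnectionFailed(StorageError):
    pass

class QuotaExceeded(StorageError):
    pass

def save(blob, fail_with=None):
    """Stand-in library call; `fail_with` lets us demonstrate failures."""
    if fail_with is not None:
        raise fail_with
    return len(blob)

def handle(err=None):
    try:
        save(b"data", fail_with=err)
    except QuotaExceeded:
        return "free space and retry"   # as specific as we need...
    except StorageError:
        return "report and give up"     # ...or as generic
    return "saved"
```

A caller who only cares that "storage failed somehow" catches the root class; one with a recovery strategy for a particular failure catches the leaf.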


I've been writing code for 15 years (in Java, C++ and more recently JavaScript and Go). I have finally given up on exceptions and settled on return values (I particularly like Go's system). Exceptions, while theoretically superior, simply tempt even good programmers to just kick errors down the call stack. I prefer guard clauses [1].

[1] http://wiki.c2.com/?GuardClause


I like the functional approach with Try, basically the exception becomes part of the return type, and the code chooses to either return an exception or the actual value.

   fun doSomething(arg: X): Try<Y>
Since the return signature needs adjusting, this leads to developers very consciously making the choice to either handle the error in the function, therefore avoiding adjusting the return type, or let the caller deal with it if it isn't logical to handle the error there.


Is this the same as `Result<Y>` (where `Result` is a sum type that contains either a value of `Y` or an "Error")? I haven't seen it called `Try` before.


It's a common name for the concept in Java, Scala, and, to a lesser extent, JavaScript land. But it's just a name for a `T | Exception` type.


Why wouldn’t you want errors kicked down the call stack? There are only two types of exceptions in the grand scheme of things.

1. Things go wrong that are out of your control - network down, database down, etc.

2. Coding mistakes. Either in your code or input arguments.

In either case, why not let the end user decide how to handle the error? Sometimes it’s some type of retry pattern, others it just to have a big try catch block that logs the fatal error and alerts someone.


> network down, database down

These should be handled gracefully at each layer, and an appropriate error thrown to the layer above.


At the highest layer. I don’t want other layers to wrap the error and probably lose the original error and the stack trace.

If the system is database or network dependent, there is no graceful way to handle it automatically most of the time.


> probably lose the original error and the stack trace.

In particular, Python solves this by having a “raise … from” language construct:

  try:
      dangerous_operation()
  except DangerousException as e:
      raise ThisModulesException("Dangerous operation failed") from e
Then, the original exception (including its stack trace, etc.) is available as an attribute on the exception you caught.


I’m not really in love with that either. The equivalent paradigm in C# is using the “Inner Exception”. What value did you add by wrapping the exception? The stack trace already has the line number and the method that caused the error all the way down the stack. In Python, the code is right there: you can just open up the Python file in a text editor and see everything.


> what value did you add by wrapping the exception?

By wrapping the exception, the user of “thismodule” can simply call it by writing

  try:
      thismodule.do_thing()
  except thismodule.ThisModulesException:
      logging.exception("Failed to do thing")
      othermodule.do_other_thing_instead()
And this user of “thismodule” is free from having to know that thismodule calls dangerous_operation() and/or raises DangerousException (which are probably both from a different module). This information will be shown automatically in the exception’s backtrace, including all line numbers of all wrapped exceptions, so it is not lost. But the code which uses the module is shorter, simpler, and has less knowledge about the internals of the module it is using.


How would that be any more helpful than just catching a generic exception and logging it and do_other_thing?


If a low-level module has an unexpected ValueError or ZeroDivisionError, I don’t want that to be inadvertently caught and hidden by my except clause. In general, I want truly unexpected errors to propagate to the top level and generate a proper crash.

Only in the case of code where I really need to do something if a specific operation fails, for whatever reason, do I use “except Exception:” or its even more catch-all variant, the bare “except:” clause. And even then, I very often just use it to log a message or send an e-mail, and re-raise the exception again afterwards.
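
The log-and-re-raise pattern is just a bare `raise` inside the handler; a small sketch (with a stand-in `do_thing` that always fails, for demonstration):

```python
import logging

def do_thing():
    raise ValueError("boom")  # stand-in for the real operation

def do_thing_with_audit():
    try:
        do_thing()
    except Exception:
        logging.exception("do_thing failed")  # record it for operators...
        raise  # ...then re-raise the same exception, traceback intact
```

The bare `raise` re-raises the original exception object, so the top-level handler still sees the full backtrace rather than a new exception born at the logging site.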


That's the theory. In reality each layer just passes exceptions from lower layers on, leaving the end user with a mess that cannot be handled in any sane way.


To me that’s an indication that those lower layers aren’t lower layers.



