Isaacs: try/catch is an anti-pattern (groups.google.com)
238 points by jacobr on Nov 14, 2011 | 136 comments



Anybody who wants to judge try/catch should first go and read about Common Lisp's condition system. See for example the chapter about conditions and restarts from the excellent book "Practical Common Lisp" by Peter Seibel:

http://www.gigamonkeys.com/book/beyond-exception-handling-co...

Notice I'm not saying you should go and program in Common Lisp, just that you should understand those ideas before you engage in any sort of meaningful discussion about exceptions, errors and error handling in general. Because, you know, some things have already been thought about and invented. Don't re-invent them.

And on a more practical level, in languages with dynamic binding it isn't difficult to provide error handlers and "send" them up the call chain, so that in case of an exception your handler gets called, fixes the problem, and lets the called function continue. You can do all that using try/catch as a low-level tool, I've seen it done in Clojure, using JVM's exception system.
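The handler-passing idea doesn't even require dynamic binding; a minimal sketch in Go (all names here are hypothetical) passes the handler down explicitly, so the low-level code can ask it to repair the problem and continue instead of unwinding:

```go
package main

import "fmt"

// handler decides how to fix a bad record; returning ok=false
// means "no handler applies" and the low-level code must fail.
type handler func(bad string) (fixed string, ok bool)

// parseRecord is the low-level function: on malformed input it
// consults the caller-supplied handler instead of aborting.
func parseRecord(raw string, h handler) (string, error) {
	if raw == "" { // treat empty input as malformed
		if fixed, ok := h(raw); ok {
			return fixed, nil // handler repaired it; keep going
		}
		return "", fmt.Errorf("unparsable record %q", raw)
	}
	return raw, nil
}

func main() {
	// The high-level caller installs a "use a placeholder" restart.
	useDefault := func(string) (string, bool) { return "<missing>", true }
	rec, err := parseRecord("", useDefault)
	fmt.Println(rec, err)
}
```

Dynamic binding just makes the handler implicit rather than an explicit parameter; either way, the decision lives high in the stack while execution resumes down low.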


I work in Windows file systems for a living. We very much live with two models: exception handling and canonical return codes. I cannot tell you how many times I would have killed for Lisp-like conditions. If I were to tell you that, however, I would also have to tell you that most of the killing would have been in vain.

The trouble with the never-ending "try-catch versus conditions versus return codes versus fail-fast" argument is that there is no easy way to have the conversation about big swaths of code. The log parser is a great example, yes, but it is exceedingly simple. The fact is that 'low' in the linked example is exposing its guts to 'high' whether it likes it or not, and that in any reasonably large body of code this too can become unmanageable.

Wherever you see a religious war, your Spidey sense should be tingling, telling you: people are arguing over which tool is better for all jobs when in fact you might want to learn all of the tools and choose the best whenever possible. And that you will at times show up to a job site where they're using the wrong tool and you'll have to learn to change them or live with it -- whichever makes more sense/is more possible.

That said, sometimes consistency just wins out. A module that throws to a module that returns is always baffling, but if it's hiding this fact from the rest of the module -- or many more modules -- then it's worth it.

(edited to note that these are Windows file systems, hoping to avoid 'is that really so')


Inspired by Lisp's condition system I wrote a Ruby library which comes pretty close to Lisp's error handling. It also implements "restarts" ... a little bit messy, though, since, as you said, low-level exceptions are used to get it working. At one point the condition system bootstraps itself, which is quite interesting to think about.

https://github.com/melkon/conditions

An example ("parse_log_file" and "log_analyzer" show restarts): https://github.com/melkon/conditions/blob/master/example/exa...

In the beginning I thought it could only be ported to Ruby because I used some Ruby-specific features. The exception-based solution (which I implemented later) should work in a lot of different languages, though.

Another thing: once you understand the condition system, you can use it for a lot of different things besides error handling. You can build protocols on top of it, event handling, etc. Freedom is yours.


Have you considered using Ruby's throw/catch instead of exceptions? I did that for Atomy's condition system[1]. They worked great because you can just use the restart names for the throw/catch tags.

[1]: https://github.com/vito/atomy/blob/master/kernel/condition.a...


I might have considered throw and catch as I wrote the library, but right now, I cannot think of any good reason why I haven't used it. Will check that again. Thanks for the suggestion.


Spot on, but CL's error generation/capture/recovery mechanism only helps in single-threaded code composition. Erlang's model of linking up processes, so that a "supervisor" process gets notified if a worker dies, is the counterpart in a concurrent scenario. Together, they seem to me to cover most of the ground.

Furthermore, in Haskell you can throw an exception to another thread, though I'm not sure whether that's any more valuable as a design tool than a plain message passing channel. (see also "throwTo & block statements considered harmful" http://www.haskell.org/pipermail/haskell-cafe/2006-December/...)

Go's "defer" is nice syntax btw.
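For readers who haven't seen it: deferred calls run in LIFO order when the enclosing function returns, on every exit path, so cleanup code sits right next to the acquisition it undoes. A small illustrative sketch (the event names are made up):

```go
package main

import "fmt"

// cleanupOrder records the order in which work and its deferred
// cleanups execute: defers run last-in-first-out on any return.
func cleanupOrder() []string {
	var events []string
	func() {
		defer func() { events = append(events, "close connection") }() // registered first, runs last
		defer func() { events = append(events, "flush buffer") }()     // registered second, runs first
		events = append(events, "do work")
	}()
	return events
}

func main() {
	fmt.Println(cleanupOrder())
}
```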


Good point! I never actually considered this, because almost all Common Lisp code I wrote was single-threaded (CL and threads aren't friends), and nowadays I write in Clojure, where I just stick to fairly plain catch/throw.

This is a great example of why it's always worth learning various languages, not just sticking to what you know. You regularly get eye-opening revelation moments.


This seems to be exactly the attitude of Go:

"We believe that coupling exceptions to a control structure, as in the try-catch-finally idiom, results in convoluted code. It also tends to encourage programmers to label too many ordinary errors, such as failing to open a file, as exceptional.

Go takes a different approach. For plain error handling, Go's multi-value returns make it easy to report an error without overloading the return value. A canonical error type, coupled with Go's other features, makes error handling pleasant but quite different from that in other languages.

Go also has a couple of built-in functions to signal and recover from truly exceptional conditions. The recovery mechanism is executed only as part of a function's state being torn down after an error, which is sufficient to handle catastrophe but requires no extra control structures and, when used well, can result in clean error-handling code."

http://golang.org/doc/go_faq.html#exceptions

In Go, if there's a programmer error, call panic(); if there's a non-programmer error, return it as a second return value.

[Added] Plus, it's obvious to see where people ignore errors:

   f, _ := os.Open("filename")
"_" is a throw-away variable to indicate that this value won't be used in the code. It's obvious that the programmer decided to ignore the error.

   f, err := os.Open("filename")
If you don't use "err" later, this won't compile.

   f, err := os.Open("filename")
   if err != nil {
      // handle error
   }
This code handles error.


This is a bad idea that keeps coming back again and again.

I see nothing wrong with exceptions, I do have a problem with (1) checked exceptions, and (2) catching exceptions prematurely and (3) people not learning how to use "finally" so they do (2) and rethrow.

Languages like Go and Scala roll out various mechanisms that bring us back to the bad old days of C, when we had to check the return value and/or the error code after every function call... if we wanted error handling to work.

The trouble with this approach is that it increases code bulk. For CS class projects, this isn't so bad, but when you're building real systems, the complexity of the error handling can approach or exceed the complexity of the "normal" path and when that happens you're in deep trouble.

Exceptions drastically reduce code bulk by introducing default "abort" behavior, which can itself be aborted at any level of the program and which can invoke cleanup anywhere in between.

Many programmers in many situations would be perfectly happy to catch "failure to open a file" and "failure to open a database connection" and "failure to connect to a network host" with one simple handler that logs the failure and either aborts, retries or ignores.


when you're building real systems, the complexity of the error handling can approach or exceed the complexity of the "normal" path and when that happens you're in deep trouble.

Go is a systems programming language, and a large part of systems programming is dealing with potential errors. Take a look at, say, how the Linux kernel implements a system call. In fact, this is do_mmap_pgoff(), which does the bulk of the work for a mmap() system call in Linux: http://lxr.linux.no/linux+v3.1.1/mm/mmap.c#L942

It's almost nothing but error checking, and I submit that is as intended. In this circumstance, you want all of the error checking right in front of you, because that error checking is enforcing very important kernel policy. A lot of kernel code is error checking, because it has a lot of policy to enforce.

You're right that this error checking paradigm is a throwback to C, but Go was designed as a better C. Go's means of handling errors is exactly what I wish I could do in C; it allows every function to return both a value and an error code. It avoids passing in pointers to values because the function returns an error code, or having to check a global errno because the function returns a meaningful value. It's kinda like living in a world where C++, Java and Objective-C were never invented. (I like and use C++, so please don't take that as jumping on the C++-is-the-worst-thing-ever bandwagon.) I find that a very interesting direction, one which should be explored.

I use exceptions for higher-level code. When writing, say, a parser, I'd rather throw and catch exceptions. I don't think we need to choose one error-catching paradigm and declare it's best for all levels of code.


When writing, say, a parser, I'd rather throw and catch exceptions.

Which is also possible in Go, and the template parser from its standard library is actually written in this style:

http://golang.org/src/pkg/template/parse/parse.go#L96

Functions call panic() (via t.errorf) to avoid passing errors between multiple levels of functions, but then the Parse method catches it with recover(), checks what kind of error it is, and either panics again if it's a runtime error or returns other kinds of errors.
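The pattern described above can be condensed to a sketch like this (all names hypothetical, loosely modeled on the style of the template parser):

```go
package main

import (
	"fmt"
	"runtime"
)

// parseError marks failures we raise ourselves, so recover can
// tell them apart from genuine runtime errors.
type parseError struct{ msg string }

func (e parseError) Error() string { return e.msg }

// errorf mirrors the t.errorf trick: panic with our own type to
// unwind many levels of parsing functions at once.
func errorf(format string, args ...interface{}) {
	panic(parseError{fmt.Sprintf(format, args...)})
}

// parseItem stands in for a deeply nested parsing function.
func parseItem(tok string) string {
	if tok == "" {
		errorf("unexpected empty token")
	}
	return tok
}

// Parse is the only place that recovers: our parse errors become
// a returned error; anything else (a real bug) is re-panicked.
func Parse(tok string) (item string, err error) {
	defer func() {
		if r := recover(); r != nil {
			if _, ok := r.(runtime.Error); ok {
				panic(r) // real runtime error: don't swallow it
			}
			err = r.(parseError)
		}
	}()
	return parseItem(tok), nil
}

func main() {
	_, err := Parse("")
	fmt.Println(err)
}
```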


> when we had to check the return/value and or the error code after every function call... if we wanted error handling to work.

Errors as return values force you to think about every possible error, which is a good thing for code quality. Look at how much rock stable C software we have out there. Software that can be compiled on many different architectures, run in many different environments, and it all just works, even 20 years later.

Writing code with try/catch is much less work. Not because you have to do less typing, but because you simply think less about how errors should be handled.


And you know what? In my experience it's not less robust.

So my code hasn't had every possible failure case thought through and explicitly handled in advance. That's good. Firstly some of those errors are so rare they'll almost certainly never occur in my program's lifetime; by not having to handle them explicitly and individually I save time and money. Secondly I can guarantee that, no matter how good I think I am, the program will eventually find a way of crashing I'd not considered; this approach gives me a clean means of handling unforeseen errors as well.

A bad programmer can write bad code in any pattern and with any tools. A bad C programmer using return values can create so many different standards for how to handle errors that you might as well get out the divining rods to read the code. Personally, I happen to like try...catch when it's done right (as with anything).


Just because you think a certain error will never happen for the lifetime of a program doesn't mean that not explicitly handling it is equally robust.


No, I agree, but there's a cost/benefit calculation to be done. Plus it's not like the try...catch solution that doesn't explicitly handle the error will blow up and destroy everything; if the program is properly designed it should merely degrade onto the path of general handling, log the failure and halt whatever was being done.


In a modern programming environment you can't think about every possible error. In particular, code often migrates into distributed systems where a whole new range of problems can happen.

For instance, a system might have plug-ins that get data from a CSV file, a relational database and a web service. One day somebody comes along and adds a plug-in that gets data from a noSQL database.

Add a new component to the system and you introduce not only new failure modes but new ways failure impacts the system as a whole.

The more decisions you make, the more you will make wrong decisions. Exceptions provide a reasonable default behavior that is "decision free" and reasonable ways to upgrade it.

20-year-old C software lived in a simpler world; only specialized network utilities like telnet and tcpwrappers had to face the consequences of a failing DNS lookup or the temporary failure of a network switch in Toledo.


It is a more complex world, but I think that's orthogonal to whether you prefer return codes or exceptions.

In fact, I think you can argue that keeping error-handling local to the call site (return codes) encapsulates and abstracts the errors better than letting an exception propagate arbitrarily far up the stack.

Exceptions are decision-free, but not making a decision (propagating an exception without handling it) doesn't make you any more robust to the vagaries of the modern world, it just moves the problem somewhere else.

Now, if multiple children in a call graph can experience the same error, and should be dealt with in exactly the same way, then propagating an exception up to a common ancestor in the call graph makes your code simpler. The fact that the compiler writes that dumb plumbing code for you is a great argument for exceptions, but not all applications fit the use case of:

  * Same error can manifest itself in many places
  * Each instance of the error can be dealt with in a similar-enough way to make a common exception handler simpler than handling errors at each call site.
Edit: Oh, did you mean that if you don't intend to handle an error, exceptions crash your program (good) rather than let it continue silently and do something you don't expect (bad)? If so, good point, and my apologies for the misunderstanding :)


At least in languages like Haskell you can use things like Monads that allow you to write the high level code and do the error routing for you behind the scenes. In C manually writing the error handling is a painful and error prone process.
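The same short-circuit routing can be approximated in Go without monads, in the spirit of the "errors are values" style: a small wrapper threads the first error through a chain of steps, and everything after a failure becomes a no-op. A rough sketch (safeCalc, step, and div are invented names):

```go
package main

import (
	"errors"
	"fmt"
)

// safeCalc carries a value plus the first error encountered,
// much like an Either/Maybe would behind the scenes.
type safeCalc struct {
	val int
	err error
}

// step applies f only if no earlier step has failed; otherwise
// it silently skips, doing the "error routing" for the caller.
func (c *safeCalc) step(f func(int) (int, error)) *safeCalc {
	if c.err != nil {
		return c // short-circuit: skip everything after the first failure
	}
	c.val, c.err = f(c.val)
	return c
}

// div builds a step that divides by d, failing on zero.
func div(d int) func(int) (int, error) {
	return func(n int) (int, error) {
		if d == 0 {
			return 0, errors.New("division by zero")
		}
		return n / d, nil
	}
}

func main() {
	// The happy path reads straight through; no per-step checks.
	c := (&safeCalc{val: 100}).step(div(2)).step(div(0)).step(div(5))
	fmt.Println(c.val, c.err)
}
```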


I'm sorry, but this is a very bad example. iptables had an issue for a long time where error codes were not carefully preserved in many situations and you end up with messages like:

    iptables: Unknown error 4294967295
This wouldn't happen with exceptions - even if not handled properly, you'd see where it originated and what the most probable cause of the issue is. And it's not necessarily iptables's fault - in some cases you have to really bend some rules to get the error you want. Also, you cannot stack them, so if you fail to clean up after the original error, what do you return? The first or the second error? One has to be ignored.


Return the first one. You always care more about the original error than an explanation of why you failed in cleaning up after it.


Counterexample: you failed parsing some number correctly, but the backend where you log broken transactions is corrupted and cannot be written to. You'd rather drop the second information?


Ah, so you're saying that failing to parse the number breaks the transaction.

Fair enough, you'd rather (as a human being) have the second one. As a matter of building the system, I would still return the first one.

Basically, the second information is important in the sense that you really want an audit trail, but unimportant in that it doesn't tell you why your call failed. Returning it to the caller is useless compared to returning the first error.

But yes, it should certainly be logged and potentially acted on. An application can't do anything useful with the information, but an ops guy can.

So return the first one.


> Look at how much rock stable C software we have out there.

Which would be what exactly. The work of Knuth and djb I'll grant you, but the rest?


Didn't Knuth write TeX in a variant of Pascal?


*BSD kernels, GCC, tar, gzip, ...


BSD kernels have had numerous exploits over the years. GCC is a massive hairball. gzip has had exploits (http://www.kb.cert.org/vuls/id/381508 for one).

Care to try again?


Without defending any given piece of software, the job of the exception advocate is not to prove that return values suck, but that using exceptions produces more stable software as fast or faster. This question cannot be resolved by examples/anecdotes alone.


This is not my experience at all. Return codes are far too easy to ignore. I would argue that the fact that C software works after 20 years is because it's been debugged for 20 years, everything that can possibly happen to it has probably happened and been handled. It's not because error return codes are a fundamentally better way to do this.

I know it's kind of unhip right now, but I would argue that what you're asking for is better handled using checked exceptions. That really forces you to think about error conditions, and it's enforced by the language. I'm continuously baffled by the argument that this is a bad idea.


How do you feel about Erlang's "let it fail" policy? I personally was afraid of it at first thinking you couldn't write stable code, but the result was quite the opposite. Things fail, and get started back up by supervisors and everything is happy. No error checking and code bulk, no try/catch nonsense littered all over the code.


just curious: what are you using erlang for?


Backend / Server software, build tools, general scripting (using escript).


Languages like Go and Scala roll out various mechanisms that bring us back to the bad old days of C, when we had to check the return/value and or the error code after every function call... if we wanted error handling to work.

I can't speak for Go, but Scala's Option type is very different from C error codes since it is a monad. As a result you can use it with Scala's 'for' comprehensions (which are roughly analogous to Haskell's do-notation) and write code that looks roughly imperative without having to explicitly handle any errors.

http://www.scala-lang.org/api/current/index.html#scala.Optio...


I think the point is that your specific cases:

> Many programmers in many situations would be perfectly happy to catch "failure to open a file" and "failure to open a database connection" and "failure to connect to a network host" with one simple handler that logs the failure and either aborts, retries or ignores.

aren't exception-appropriate according to a certain portion of developers. The reason they aren't exception-appropriate is that you should expect these things to happen from time to time, i.e., they are not exceptional.

Using exceptions to handle them is part of the problem Go attempts to solve by having multiple return values.


As far as I can tell either your point is a circular argument or it's an English-language nomenclature complaint fixable by s/exception/fooglewoo/.

Either way it doesn't address the real argument, about where it is appropriate to use try and catch. What is inherently better about multiple return values at every level, compared to semi-centralized catch blocks?


It's not really an objection to terminology. "Fooglewoo" handlers would still exist outside the normal flow of control while handling things that many people would consider routine.

Burying that kind of everyday, expected-circumstance logic in a side channel is, at least in some developers' opinions, detrimental to the readability and understandability of a program.


I'm sorry, I can't really parse your argument.

My point was that there is a school of thought that exceptions should only be used in "exceptional" circumstances.

The examples given were not exceptional, in that a programmer should expect those types of errors during the normal execution of their program. Therefore exceptions aren't the solution to those types of errors.

Does that make more sense to you?


But using that school of thought to support that school of thought isn't an argument, it's a circle.

PaulHoule is explaining why he likes a specific mechanism compared to another, and you are only replying with the previously-established fact that there exists a disagreement here, not a counterargument.


I get what you're saying, I just disagree.

And I think a lot of people take my point of view, as evidenced by the votes my initial post received.

No worries, thanks for the point of view :)


I do have a problem with (1) checked exceptions, and (2) catching exceptions prematurely and (3) people not learning how to use "finally" so they do (2) and rethrow.

I am probably one of those people who catches exceptions prematurely and who hasn't learned to use "finally." If you link to some advice on how to use such things, I'll read it. I want to believe that I can learn a better way to use exceptions, but they just haven't clicked for me.

Then again, I've been using exceptions in an environment (C#) that matches the author's JSON.parse() example very well. The .NET library designers already decided what counts as exceptional, and it's often not possible for me, as a .NET user, to decide much of anything about the use or placement of try/catch.

...when you're building real systems, the complexity of the error handling can approach or exceed the complexity of the "normal" path...

I can understand all too well that a complex error handling path is intimidating, tedious, distasteful, and no fun at all. But in every serious project I've been a part of, error and exception handling has been where the bulk of the design, implementation, and testing work was done. That stuff is precisely how serious systems distinguish themselves from toys. Having a catch-all crash-path doesn't change that, because the serious system is not permitted to crash. As the author here points out, that's what assert() has offered for decades anyway.


>The .NET library designers already decided what counts as exceptional, and it's often not possible for me, as a .NET user, to decide much of anything about the use or placement of try/catch.

What do you mean? You can certainly decide what's exceptional. You can roll your own exceptions. You can catch and discard or handle exceptions you don't want to bubble up. You can put try-catch everywhere or nowhere (or choose a reasonable place in between). Your hands are not tied by .net exceptions any more than they are tied by any other reasonable error handling model.


You can certainly decide what's exceptional.

I can decide to make something in my code exceptional, sure. I can't decide to make something in the library not-exceptional, though.

You can put try-catch everywhere or nowhere (or choose a reasonable place in between).

This isn't always the case. Sometimes the only way to answer a question ("Can this string be parsed as an integer?") is to try it and catch the exception. And there's just no way to do that coherently without catching it immediately.

Later releases of .NET do include a non-exception-throwing TryParse() call in many places. I'm pretty sure I've run into some cases where that was not available, however, and I'm pretty sure I've run into similar situations in other methods besides Parse(). And TryParse() was a late addition; look back in the 1.1 or 2.0 docs, and it doesn't exist.

In other words, somebody thought it was reasonable to force .NET users to catch some exceptions immediately.

Your hands are not tied by .net exceptions...

No, but there's not much point in using .NET if I'm not using the library that comes with it. And the design of the library does tie my hands in some cases.

I think the author makes a great point about libraries that use exception handling blurring the line between bugs and expected problems. That's exactly how I feel about the C# work I've done.


> I can decide to make something in my code exceptional, sure. I can't decide to make something in the library not-exceptional, though.

Making something in the library non-exceptional is equivalent to discarding an error. Catch the exception and discard it. Done. Do this at whatever level you feel is appropriate (or don't, and handle the exception in a more reasonable fashion).

> In other words, somebody thought it was reasonable to force .NET users to catch some exceptions immediately.

What do you suppose should be done? The other option seems to be to continue in an erroneous state. I'm probably not familiar with every possible error-handling methodology, but it seems the main ones are "error codes" (those worked so great in C, right), "exceptions" (annoying, but error handling in general is annoying), and "injected handling" (where the caller can inject error handling code somehow; but this is more complicated and requires deeper knowledge about the callee). Is there a better option?

> No, but there's not much point in using .NET if I'm not using the library that comes with it. And the design of the library does tie my hands in some cases.

My point is that you are not tied by .Net any more than you are tied by any other exception-handling language. You can handle exceptions where and how you feel is appropriate.

> I think the author makes a great point about libraries that use exception handling blurring the line between bugs and expected problems. That's exactly how I feel about the C# work I've done.

I think this is a bit of a red herring. An exceptional condition is simply a special case. A bug in the code is an exceptional condition. A problem parsing a number is an exceptional condition. Modern languages generally have specific exceptions to allow you to handle different situations with custom logic, but it's important to note that the runtime can't reliably distinguish between "bugs" and "expected problems". Did your number parsing fail because the user entered an invalid value or because your code grabbed column 3 instead of 4?


Making something in the library non-exceptional is equivalent to discarding an error ... The other option seems to be to continue in an erroneous state.

I guess this is where we differ. I feel that the designers of the .NET library have chosen to throw exceptions in places where nothing exceptional is actually happening, where no error has occurred, where the programmer may very well be expecting the "exceptional" outcome.

Where my code must handle such conditions, forcing me to handle them as exceptions makes my code longer, less readable, harder to change, and harder to reason about.

...it's important to note that the runtime can't reliably distinguish between "bugs" and "expected problems".

This is exactly what the author of the linked piece points out, this is a part of my complaint, and it's an issue Microsoft has tacitly acknowledged the seriousness of by the addition of alternatives to exception-throwing calls, like TryParse().

The problem from my perspective isn't the runtime or the languages that target it, but choices made when the library was designed.

It does not feel to me, as a user of these massive libraries, that there was any systematic way of deciding what should be and what should not be reported as an exception.


> I guess this is where we differ. I feel that the designers of the .NET library have chosen to throw exceptions in places where nothing exceptional is actually happening, where no error has occurred, where the programmer may very well be expecting the "exceptional" outcome.

When does this actually happen? Are you really expecting it to fail when you open a file, or when you parse an integer, or whatever else? Where is .Net throwing exceptions in cases that no error has occurred and that you expect?

This complaint is common, but it feels rather hollow to me. Most of the time it seems to come down to a preference for error codes over exceptions, or an annoyance with the try-catch boilerplate, rather than a legitimate complaint about exceptions being thrown inappropriately.

> Where my code must handle such conditions, forcing me to handle them as exceptions makes my code longer, less readable, harder to change, and harder to reason about.

Are you suggesting that every function should have two versions like Parse and TryParse? Is this really what you'd prefer the .Net team work on, instead of providing new tools and functionality? Or are you wanting something like "ON ERROR RESUME NEXT" so that you can ignore these "expected" errors?

> This is exactly what the author of the linked piece points out, this is a part of my complaint, and it's an issue Microsoft has tacitly acknowledged the seriousness of by the addition of alternatives to exception-throwing calls, like TryParse().

Eh, the linked piece seemed mostly to be pining for the days of error codes. There's no general way to determine if a failure is a "bug" or "expected", not for exceptions and not for anything else. If you want to avoid exceptions for "expected" failures, then you're asking for no exceptions at all, which is fine, but the problem isn't just the definition of "exceptional".

> It does not feel to me, as a user of these massive libraries, that there was any systematic way of deciding what should be and what should not be reported as an exception.

The systematic way was "it's exceptional if it's not the desired or expected outcome". The addition of TryParse was a nice bonus, but is in itself an exception to the exception model.


When does this actually happen?

Oh, for Pete's sake. I'm not learning what I hoped to here, and you've made up your mind.


What were you hoping to learn?

Can you give me a practical situation where an exception is thrown despite there not being an error, aside from the canonical Integer.Parse() example? In my experience, that's not the bulk of any practical program, and it's still exceptional from the point of view of the Parse function.

I'd be interested in discussing this, but I'm not really sure what you think would be an improvement. And yes, it's a rather uphill battle if your proposal is to use error codes most of the time. I think that ship already sailed (although there's a strong case for error codes in C++, but that's kind of a special case).


An example comes to mind, from the very issue I'm working on at the moment:

System.ComponentModel.Win32Exception: The operation completed successfully at System.Drawing.BufferedGraphicsContext.CreateCompatibleDIB(IntPtr hdc, IntPtr hpal, Int32 ulWidth, Int32 ulHeight, IntPtr& ppvBits)


That's really terrible, and I can't defend that.

However, in the interest of trying to be helpful, my guess is that you have a bug in your app that plays poorly with a bug in Win32. A quick search for BufferedGraphicsContext.CreateCompatibleDIB yielded this question on SO (link to top answer) which indicates a resource leak may be at fault:

http://stackoverflow.com/questions/1209769/system-componentm...


How does Scala contribute to this problem? It does not have checked exceptions.


This:

  > We believe that coupling exceptions to a control
  > structure, as in the try-catch-finally idiom,
  > results in convoluted code.
Seems at odds with this:

   f, err := os.Open("filename")
   if err != nil {
      // handle error
   }
You're still using a control structure (if-statement) to handle the error.


But that control structure isn't specific to handling errors, unlike try/catch. I think that's what they meant.


I don't think this style is much better than exceptions. I think Go could have done itself a favor if it had variants + pattern matching. I find those to be, in many cases, superior to exceptions, as the compiler checks that you are handling everything and you can easily encode success and failure in the variant type.


This is exactly what I observed when using OCaml: Although it provides exceptions, I almost never used them. Most things were a lot simpler to implement via pattern matching (http://caml.inria.fr/pub/docs/oreilly-book/html/book-ora016....).


For some context here's another error handler in Google Go:

    defer func() {
        if r := recover(); r != nil {
            if err, ok := r.(runtime.Error); ok {
                if err.String() == "runtime error: index out of range" {
                    // handle bad index
                    return
                }
            }

            panic(r)
        }
    }()
vs

    try {
    }
    catch (IndexOutOfBoundsException ex) {
        // handle bad index
    }
A language where you have to resort to string comparison to handle invalid array accesses can't be used as a model for good error handling, IMO.

Of course that's just the tip of the iceberg: no way for IDEs/tools to know which return value is an error, no way to know at a glance whether "_" was an ignored error or some other ignored extra return value, plus other fundamental problems caused by implicit types.


Whose crazy code is that? NOT idiomatic. If you need to recover from bad slice indexes then your code is broken and you have way more to worry about than a messy recover closure.


> If you need to recover from bad slice indexes then your [error handler] code is broken

You bring up a good point: when resorting to value comparisons for error handling instead of type comparisons, it's easy to make mistakes. The code is, as far as I can tell, the simplest way to handle an index out of bounds, but to also handle a 'slice out of bounds' it would need to compare against the string value "runtime error: slice bounds out of range" as well -- not helping the case for error handling in Google Go.

This code strikingly shows deficiencies in Google Go non-local error handling:

- Tons of boilerplate (defer, recover, re-panic, type check, value check)

- Not scoped so can only handle an error once per function

- Have to do value tests in addition since types are very generic due to implicit interfaces

- Error values are poorly defined

- Result for higher-level caller is buried deep in the function and non-obvious

These problems extend to all non-local error handling in Google Go, not just for this specific case of array and string indexes. That the idiomatic way to 'handle' an error is to abort the program is another separate problem.


> You can't reasonably argue that this:

  try {
    foo = JSON.parse(input)
  } catch (er) {
    return "invalid data"
  }
is more lightweight than:

  foo = JSON.parse(input)
  if (!foo) return "invalid data"

This is like saying that walking is faster than driving because I can walk 5 meters faster than it takes me to get into a car. Yes, error codes are more compact in a tiny "Hello world" example because it is only showing one function call. Exception handling becomes more compact when you're writing something less trivial and you don't have to repeat the same error handling code after every call.
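To make the non-trivial case concrete, here's a JavaScript sketch (parseConfig and its validation rule are made up for illustration): one try/catch covers a whole sequence of fallible steps, where error codes would need a check after each one.

```javascript
// Hypothetical multi-step parse: any step may fail, one handler recovers.
function parseConfig(input) {
  try {
    const data = JSON.parse(input);           // may throw SyntaxError
    const port = Number(data.port);
    if (!Number.isInteger(port)) {
      throw new Error("port must be an integer");
    }
    return { port };                          // success
  } catch (er) {
    return null;                              // single recovery point
  }
}

// The error-code equivalent repeats `if (!x) return null;` after every step.
```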

> Try/catch is goto wrapped in pretty braces. There's no way to continue where you left off, once the error is handled.

Don't throw exceptions if you can handle the error and continue where you left off. Exceptions are for when you can't continue. Think of throwing as a way to roll back a transaction: stop whatever you were trying to do and go back to the last consistent state.

Obviously, exceptions are not perfect. As the author correctly notes, they require careful consideration of what's exceptional and what's part of the normal flow. But they were invented for a reason, and I don't see the article offering any alternative solutions to the problems that exceptions are solving now.


> Don't throw exceptions if you can handle the error and continue where you left off. Exceptions is for when you can't continue.

Yes, although quoting that example doesn't support the assertion. JSON.parse is a library function. How can it judge whether the caller can continue or not just because the JSON cannot be parsed?

> Think of throwing as a way to roll back transaction, stop whatever you were trying to do, and go back to the last consistent state.

Any try block that is larger than one atomic operation can become a nightmare to roll back, since the catch gives you no idea how far it was into the block, what resources were allocated, etc. So although try/catch avoids the hassle of checking state after each operation, you pay for it on errors.


> JSON.parse is a library function. How can it judge whether the caller can continue or not just because the JSON cannot be parsed?

Don't make assumptions about the caller, throw if your library can't continue.

> So although try/catch avoids the hassle of checking state after each operation, you pay for it on errors.

If your language supports RAII (http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initial...) you get a free ride. Otherwise you have to write non-trivial cleanup code regardless of how you return errors. Try/catch (and especially finally) helps a lot because you get to group all your cleanup in one place. With error codes the code looks something like:

  a = acquire(A)
  if (!a) return

  b = acquire(B)
  if (!b) {
     release(a)
     return
  }

  c = acquire(C)
  if (!c) {
     release(a)
     release(b)
     return
  }

  ...
This is a maintenance nightmare. It's hard to see what the code is doing because the real logic is hidden between piles of error handling stuff. If you add a new resource acquisition in the middle, you have to go over every statement that follows and modify the error handlers. And don't you dare change the order of statements, because that basically forces you to rewrite the whole function. Believe me, you don't want this in your code; exceptions are your friend :)
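By contrast, a finally block keeps the cleanup in one place. A JavaScript sketch (acquire/release are stand-ins for real resource calls, and the log is just there to make the behavior observable):

```javascript
const log = [];
const acquire = (name) => { log.push("acquire " + name); return name; };
const release = (name) => { log.push("release " + name); };

function run() {
  let a, b, c;
  try {
    a = acquire("A");
    b = acquire("B");
    c = acquire("C");
    throw new Error("boom");   // a failure anywhere in the sequence...
  } finally {
    if (c) release(c);         // ...unwinds through one cleanup block,
    if (b) release(b);         // releasing only what was acquired,
    if (a) release(a);         // in reverse order
  }
}

try { run(); } catch (er) { /* resources were already released */ }
```

Reordering or adding an acquisition only touches these two spots, instead of every error handler below it.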


No, the way to handle that pattern is with nested gotos:

    a = acquire(A);
    if (!a) goto err_a;

    b = acquire(B);
    if (!b) goto err_b;

    c = acquire(C);
    if (!c) goto err_c;

    do_stuff(a,b,c);
    release(c);

  err_c:
    release(b);
  err_b:
    release(a);
  err_a:
    return;
This is precisely why goto is not universally evil.


Consider:

  a = acquire(A);
  if (a)
  {
      b = acquire(B);
      if (b)
      {
          c = acquire(C);
          if (c)
          {
              do_stuff(a,b,c);
              release(c);
          }
          release(b);
      }
      release(a);
  }
  return;


> I much prefer php's json_decode function, since it just returns `null` on invalid input.

A function which has the same result in case of an error as when given valid input (hint: 'null' is a valid JSON string) is neither good design nor something I would actually 'prefer'.

Aside of that (and more to the point of the original article), I do believe that exceptions can be very useful the deeper the abstraction of your libraries gets.

If you are 'far down' the stack and you want to ensure that it's safe to proceed, while making it very, very clear to the caller that something went wrong, throw() is the perfect tool for avoiding being called with a garbage argument on a subsequent call to a different function (hint: error results tend to get ignored).

And if you are a user of a library, getting nice exceptions can be very handy too - sometimes even wrapping and re- throwing them.

A very good example from real code: a command line script which processes a lot of text data to import into a database, delegating to various importer classes depending on the import line type.

All these importer classes do their thing and whenever they come across an issue, they just throw an ImporterException with all the context that they know about.

The parent script only has to deal with one single case of Exception to produce nice error messages and show everything about the context where they happened.

I can make the importers as complicated as they need to be and I never have to check a single return value (or forget to check it). There is one central place to handle whatever kind of Error that can creep up.

This is very handy.

Granted, in JS/node (where each callback cleans out the stack, so a thrown exception can't be handled by whatever caused the callback to be executed later), exceptions are useless and actually harmful because they can mess up the program flow (which assumes the callback will be triggered eventually).

But exceptions being useless in one environment doesn't make them useless everywhere.


I thought a JSON document had to have either a top level object or array, which would make a bare null an invalid JSON text. Granted, we might not be that strict all the time.

See section 2, paragraph 2: http://www.ietf.org/rfc/rfc4627.txt?number=4627


The issue with the JSON spec is that it's deceptively simple. For instance, this particular issue comes up relatively often. As you point out it's quite specific that a valid "JSON text" has a top level object or array. This assumption is even required for the sections on "detecting character encodings" (which makes me wince to this day).

The particular issue is that the spec refers to "JSON text" (which no one ever uses in practice) as well as "JSON value" which is what everyone expects. The difference being that a "JSON value" is any of the 'bare' types (null, boolean, number, string, object, or array). A quick survey of JSON parsers will show you that most will accept any of these types at the top level. The one notable exception I'm aware of is Ruby's JSON library (the default one? I'm not so hip to the Ruby).

Notably, though, the major JavaScript interpreters don't enforce this constraint (nor do Python or YAJL (kinda) or Erlang (even the ones I didn't write)).

Don't bother asking me about invalid combining characters as \u escapes. You wouldn't like me when I get the rage eyes.


It was pointed out later in the thread that it should have been 'undefined' rather than 'null'.


I'm not keen on exposing undefined to external code; it inflates it into a “valid” value like null has become.


I'm genuinely curious: how is returning undefined any different from returning null, in this case?


If you return null, there is no way to discern between invalid input data and 'null' as input.

If you return null for 'null' as input and undefined for invalid data, then you don't have to look at the input again to check whether you just failed to parse json or whether the input was just 'null'.

   if (HypotheticalJSON.parse(input) === null && input.trim() != 'null'){ alert('invalid input'); }
instead of just

   if (HypotheticalJSON.parse(input) === undefined){ alert('invalid input'); }
Though as I said in my comment, I do believe throwing an exception to be a valid action here - at least in any other environment than server-side JS where you have to be a bit careful.


Ok, so it's the case that null is valid data and makes for a bad "invalid value".

Personally, I think this demonstrates a core reason why return values (and returning null in particular, in any language) are a bad way of handling errors -- they're a leaky abstraction, and too much inside knowledge is required to use them safely. For example, if I have code like

    a = foo()
    if (a is valid) bar(a)
I have to know about the possible values of a before I can pass it to bar. Exceptions are better because I don't have to know the internal details (foo and bar could be library functions that return some data I don't know all the valid values of):

    try {
        a = foo()
        bar(a)
    } catch (e) {
        handle error here
    }
But I also agree with the post that exceptions aren't always great either.

Instead of returning null, I like Haskell's Maybe. You either return a valid value (which can be null, in this case) or you return Nothing, which can never be something that is also valid data. This removes the ambiguity of whether null is an error value or a valid value, and the Haskell compiler makes sure you handle the error case, but it's still a signal-error-through-return-value method of error handling and is not always ideal. I also like the ideas behind Common Lisp's condition system, though I have never used it in real code. I like the idea of abstracting the error signalling, handling and routing into separate entities that can happen at various parts of the hierarchy. I also like Go's defer/panic/recover mechanism, though, again, I have never used it in real code, so I don't know how well it works in reality.
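As a rough JavaScript approximation of that Maybe idea (without the compiler enforcement, which is the real win in Haskell; Just/Nothing/safeParse are illustrative names, not a library), a tagged result keeps a valid null distinguishable from failure:

```javascript
// Maybe-like tagged results: success carries a value, failure carries nothing.
const Just = (value) => ({ ok: true, value });
const Nothing = { ok: false };

function safeParse(input) {
  try { return Just(JSON.parse(input)); }
  catch (er) { return Nothing; }
}

// 'null' is valid JSON, yet the tag still separates it from a parse failure:
safeParse("null");     // { ok: true, value: null }
safeParse("not json"); // { ok: false }
```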


> ... how is returning undefined any different from returning null ...

Because this valid JavaScript might have been silently slipped to the interpreter:

    undefined = true;


It doesn't really matter what the "this is not valid json" return value is.

I just don't think that it's worth the cost of capturing a bunch of stack frames for such a common occurrence as "you posted bad data to my web service".


It looks great on simple examples, but now consider your stacktrace is 5+ levels deep with some generators doing their generalised work on some collection. If the last function fails, then you better be sure that every single level:

- returns the error message and context instead of just error code

- wraps the internal problem properly inside its own error description and context

- cleans up all internal state

Guess what... that's exactly what exceptions do. If you don't do the first point, you'll end up with multiple causes producing the same error number. If you don't do the second, you'll get "invalid data" when parsing json (where? what data? which line? which element? why is it invalid?). If you don't do the third, you're going to crash anyway.

So if you don't use exceptions to make your code nicer, you're going to end up implementing the same flow over and over again in places that could just let the exception pass through. You might even write some helper functions/macros to... add a file and line number when wrapping the previous exception. It feels like reinventing the wheel.

Edit: Forgot to mention the collection processing at the end. What is tricky about it is that for a common "map()" to stop on error, you need a common way of handling errors. Exceptions do this just fine; custom methods, not necessarily.


> At least JavaScript doesn't have typed catches. Holy black mother of darkness, that shit is intolerable.

I would like the author to expand a bit instead. In python there are typed catches, it seems to make a lot of sense to me: you catch only the exceptions you want, and let the other ones bubble up. It is well explained in Martelli's Python in a Nutshell.

I have seen the try/catch construct in Java and JavaScript, and it is probably less readable and certainly can be over-used, but in Python, returning None in all exceptional cases is annoying: different issues get merged into a single "muted" case.


Rule #1 of programming articles: Be wary of anyone who states "always" or "never" or "harmful" or "evil" with regards to a code construct/idiom/pattern.

What they're telling you, in effect, is that they know everything there is to know about X, and so you should trust their judgment and not think for yourself.

And in every case I've seen so far (including the famous goto debacle), they have merely focused on their pet code, ignoring the actual cases where the construct is useful (most often, they'll include trivial examples when the construct is meant to be used in complex situations), and this article is no exception.

When your argument becomes "we should throw this out because bad programmers use it badly", you have no argument.


I think that having alternatives to try/catch is generally worthwhile. Exceptions should only be thrown in exceptional circumstances.

So (to take some C# code as an example), I'd only call int.parse(myString) if it really, really should only get called when myString contains something that's parseable to an integer.

If it just probably contains an integer, then I'd call int.TryParse(myString, out returnedInt), and check the boolean return to see whether it was valid.


This part really sunk in for me in Bruno Jouhier's reply:

> "So the big mistake I've always seen people make is being too "nervous" about exceptions and feeling that they have to do something about them as close as possible to the point where the exceptions were raised. They need to do the exact opposite: feel relaxed about exceptions and let them bubble up."

Hadn't really thought about it in that way, but I find myself employing this pattern where possible - having said that, with async callbacks it can be hard to bubble up when you essentially have multiple logical processes occurring.

Haven't found a solution to this in my own codez yet but I feel a little twitch in my eye whenever I have to do:

    try {
        something = JSON.parse ...
because there's no native `JSON.validate` method.

Not complaining though, definitely not worth losing sleep over .. yet.


> "So the big mistake I've always seen people make is being too "nervous" about exceptions and feeling that they have to do something about them as close as possible to the point where the exceptions were raised. They need to do the exact opposite: feel relaxed about exceptions and let them bubble up."

While the principle seems to make sense, I find it unusable with GUI-like apps. GUIs have hundreds of entry points from various events. If I want to catch exceptions far from where they're raised, I need to do it in each one of those event handlers, so that they don't bubble through libraries, literally hundreds of times.

All the beautifully written code I've seen has been in CLI programs. Exceptions probably work for server code, where you can crash your single process and/or redo it from the original request. GUI and framework apps can't really use a crash as acceptable behavior.


>All the beautifully written code I've seen has been in CLI programs.

I might be a fairly novice coder, but I think a good solution for this would be a client/server model. I'm a full-time Linux user and, for various reasons, I really like applications which have a daemon mode, with the UI built as a client accessing said daemon. Good examples are mpd or deluge.

This way, the actual code can handle most, if not all, exceptions and errors at the "CLI level" and expose meaningful error codes to the "API" that the UI uses. At least that's how I'm trying to build my first bigger software project (an image viewer).

Again, I'm a fairly inexperienced programmer, so this might be a foolish suggestion, but I try to give much thought to the way I'm building software. Maybe even too much, I'm kinda prone to over-engineering.


> At least JavaScript doesn't have typed catches. Holy black mother of darkness, that shit is intolerable.

Well, if his complaint is about untyped catches, well, I guess they must suck. But if he thinks that typed catches are somehow worse (hint: they're better), he is wrong.

> But still, nothing is as bad as the common "On Error Resume Next" that so many terrible VB programs start with.

Ok, everyone who always checks the return value of printf() in C, raise your hands. That's what I thought. The semantics of C is that whenever something fails, you just continue from the next line (statement, whatever...) even if the world is burning. Try-catch may suck, but checking for return values sucks more (in languages without pattern-matching), deal with it.


Note to everyone: don't confuse 'try-catch sucks' with 'exceptions suck'.

There's no inherent reason an exception-throwing method can't be invoked in a style that gives back a value/exception pair. That limitation is a property of the specific language, not all languages. For example: exceptions could be caught succinctly if method invocations prefixed with 'catch' returned a value|exception tuple (`v, ex = catch ParseInt("test")`).
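That hypothetical `v, ex = catch f(...)` form can be sketched today as a plain JavaScript helper (tryCall is an invented name, not an existing API):

```javascript
// Run any throwing function; get back a [value, error] pair instead.
function tryCall(fn, ...args) {
  try { return [fn(...args), null]; }
  catch (er) { return [undefined, er]; }
}

const [v1, e1] = tryCall(JSON.parse, '{"a": 1}'); // v1 = { a: 1 }, e1 = null
const [v2, e2] = tryCall(JSON.parse, "nope");     // v2 = undefined, e2 is a SyntaxError
```

Callers that skip the helper keep the propagate-by-default behavior; callers that opt in get error-code-style succinctness.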

The primary difference between exceptions and error codes is the default behavior. Error codes default to 'ignore' and exceptions default to 'propagate'. The rest of the differences (requiring acknowledgement, succinctness of handling, availability of stack traces) are more of a coincidence based on the common decisions in languages like Java, Go, C, C++, python, etc.

I think the main downside of existing exception implementations is they discourage programmers from defining useful error cases for functions. It's so easy to propagate or suppress that anything else feels like massive busy work. Simple improvements would be to make wrapping exceptions easy and defining new types easy (throws FileNotFound as forge MissingNetConfigFile).

On the other hand, the main downside of error code implementations is the ease of ignoring. Go has a very good idea in the "x, _ = funcWithIgnoredError" style, but it extends poorly to otherwise-void methods like 'flush stream'.


An interesting read. It's always nice to see people stating their unpopular opinions with proper reasoning.

The author's point of view seems to be pretty Javascript/Node.js -centric. I can relate. try/catch and node.js -style asynchronous coding do not work together. However, I'd say that it's the async programming model that is broken, not try/catch.

Writing code in the node.js async style is very difficult for humans (at least for me), but compilers are excellent at this type of program transformation (continuation passing style). The Haskell and Erlang compilers and runtimes do this automatically. You write regular imperative code, but it's executed in an asynchronous manner using co-operative fibers (maybe together with native OS threads).

A similar solution is possible with an interpreted language that has some kind of continuations. Every blocking system call could be translated to a co-operative fiber context switch using kqueue/epoll.

What is really baffling to me is that none of the popular interpreted languages (javascript, python, ruby) seem to have decent continuations. I don't know of a reason for not implementing continuations in an interpreter, is there any? Using continuations, one could implement pretty nice async operations and exception handling.

Using return values for error handling is really painful. I've been writing a system call heavy C program, and about 40% to 50% (in LOC) of the code is error handling. In test code coverage, I can reach just over 50% branch coverage as most error conditions don't happen while testing. I don't know of a nice way to make e.g. the network fail at the right time to test my error handling in that case.

C++ is pretty nice when it comes to error handling and exceptions as resources are cleaned up on error. Python's with..as statement or Haskell's Control.Exception.bracket can do the same thing (the latter is just a regular function, not a language builtin). In C, most error handling is required to free some memory you allocated earlier, but even in a garbage collected language you still have system resources you need to free (file handles, sockets, db connections etc).


>> However, I'd say that it's the async programming model that is broken, not try/catch.

This is an interesting point of view, however I'm inclined to disagree on the basis that the async model is representative of how things actually happen; the imperative model is not.

In an asynchronous architecture, the developer is concerned with only the current state and the set of all events that may cause a transition from that state. Whether or not an event is the result of an error condition is irrelevant, one simply invokes a different handler.

I used to consider the asynchronous model much harder to program in, that is until I changed my thought process to work in terms of state machines. Then it actually became much simpler to program in this model since, at the handler level, one has no preconception about what should happen next, only what can happen next and how to handle each of those situations.

>> C++ is pretty nice when it comes to error handling and exceptions as resources are cleaned up on error.

There has been many a discussion about the benefits vs. pitfalls of C++ exceptions. I personally am not a fan since the lack of garbage collection means that there are situations that require lots of tedious and error prone boilerplate code just to ensure that all resources are cleaned up.

I have to admit that perhaps the nicest way I have seen of handling failure is in the concept of Monads in Haskell (and other functional languages). The ability to add the context of failure as a possibility in any computation, without requiring that computation to explicitly consider it is extremely elegant. Add to that the safety of always returning a well defined value that is fully type checked by the compiler and I believe you have a recipe for very effective error handling.


"This is an interesting point of view, however I'm inclined to disagree on the basis that the async model is representative of how things actually happen; the imperative model is not."

Have you used Erlang or Haskell? I have heard this line of defense a number of times, but always from people who have to answer "no".

The claim is that it's better to let the compiler handle the issues. Your response is basically that since you learned how to more properly run what the compiler does in your head, you've done better. I would submit that's support for the idea that a compiler should be doing the work, not evidence against it. What if you could skip the part where you didn't know how, then skip the part where you had to learn better, and then skip the part where you're running this algorithm in your head, and just let the compiler do it for you from day one, and probably better than you can do it even with experience?

(Humans suck at maintaining invariants. It takes years of experience to even be sort of good at it, and you'll still be terrible compared to a compiler.)

While you should know what the compiler is doing to your code, this is hardly any different than manually implementing the C stack on every function call. The fact that it may occasionally let you do something clever doesn't get around the fact that you really shouldn't be thinking about this on every single function call at all.


> I personally am not a fan since the lack of garbage collection means that there are situations that require lots of tedious and error prone boilerplate code just to ensure that all resources are cleaned up.

The problem is memory isn't the only resource that needs to get cleaned up, and at least in Java, when a variable falls out of scope resources such as database connections, opened files, etc, will not instantly be cleaned up.

However, with c++, as long as you're using RAII (and if you're not, why aren't you?), all a resource has to do is fall out of scope to clean it up. RAII is certainly cleaner and less tedious than anything I've seen in Java and C#.


>> "This is an interesting point of view, however I'm inclined to disagree on the basis that the async model is representative of how things actually happen; the imperative model is not."

You're absolutely correct in that many operations are executed asynchronously. However, I think that it shouldn't be the model we use to write programs. Programming is all about abstractions and the imperative model is a good abstraction which can be implemented effectively by a compiler and a runtime system.

Sure, a program can be composed by maintaining a state, which changes when events occur. I just feel that it gets really hairy really quickly when you start to have several IO operations in sequence and have to start thinking about multiple error conditions. With exceptions and imperative code, you can write the IO operations one after another and catch the exceptions where they can be handled (e.g. show an error dialog in the GUI). With hand-written CPS async code you have to assign an error handler to each IO call, even if you'd want to have only one handler for several error conditions.

>> "I have to admit that perhaps the nicest way I have seen of handling failure is in the concept of Monads in Haskell"

I think that failures are not handled with the monad failure function any more in Haskell; exceptions are used these days (a quite recent addition to the Haskell language). A catch can only be in IO code, but a throw can be in either monadic or functional code.


I think the real problem here is that JavaScript/Node.js is constrained by language choice to a hybrid continuation-passing/call-return style. As straightforward as exceptions are in call-return, they are even simpler in CPS -- the exception handler is just a second continuation. It's literally just a standard-form onError parameter.

The problem is that you can't write your whole program in continuation-passing style in JavaScript, because its call and exception mechanisms aren't designed for CPS. There is no tail-call optimization, and you can't configure exception handlers manually. That's a valid choice under the right circumstances, but it breaks CPS.


Bruno Jouhier's response nails it. It's funny, but for whatever reason I almost always disagree with Isaac's (perfectly reasonable) points of view.


The best part about exceptions (unchecked), is that they allow an exception to automatically bubble upward to the level of code that actually should be responsible for taking action, without having to write a ton of boilerplate code all the way down the call chain.

Even better, when stack traces in exception logs are examined, it's easy to see where in the call chain things are going wrong.

During a database transaction for example, something might go wrong 5 method calls deep, and I should roll back the transaction.

If I'm forced to either use checked exceptions or use if statements to check whether there was success at each level, then that's a lot of repeated boilerplate try/catch/rethrow or if(success) code which unchecked exceptions free me up from having to write.

If all I receive at the transaction level is a failure code, then what do I log, other than "something went wrong?!". With exceptions, all the work is done for me. I just log the exception stack trace and I can easily see what needs to be fixed.

Much of the time, exceptions are useful because they are thrown because of something you didn't anticipate happening, as opposed to something you had planned for when writing the code. And when that unanticipated thing does happen, there is nothing more beautiful than an automatically generated stack trace telling you exactly what happened.
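The transaction example above can be sketched in JavaScript (db here is a stand-in object with begin/commit/rollback methods, not any particular driver):

```javascript
// One rollback point: a failure anywhere inside `work`, however many calls
// deep, unwinds to this catch, and the re-thrown Error keeps its stack trace.
function withTransaction(db, work) {
  db.begin();
  try {
    const result = work();
    db.commit();
    return result;
  } catch (er) {
    db.rollback();
    throw er; // let the level responsible for logging see the original error
  }
}
```

Nothing between the deep failure and this function needs any error handling code at all.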


Exceptions vs. error codes seems closely related to dynamic typing vs. static typing a la Haskell.

If you use error codes exclusively, then it's possible to do much stronger reasoning about all code paths, and you can create a more robust system. It doesn't have the magic of the static analysis that Haskell's type system enables, but there is a certain "purity" in that no function can have an error pass through it in the call stack without being aware of it. This is exactly what you need in systems programming like an OS kernel where every conceivable thing will go wrong and your goal is to handle everything gracefully.

On the other hand, when writing business apps, you not only have the technical concerns of the system, but your program is primarily a big hairy rat's nest of business logic which may never achieve well-definition. In such circumstances, you never get to a point of stability where you can be so pedantic about the technical details because you have your hands full just trying to meet the ever-changing business requirements. Exceptions are a great help here because they allow you to handle new errors with minimal ceremony and function redefinitions. Just like Ruby duck-typing, this means you can refactor faster, but obviously in a dramatically less robust fashion than Haskell. But if you're operating in an environment where a logical failure here or there is acceptable and code has a short half-life, then you'll get better ROI from duck-typed, unit-tested, exception-notifying code than you will from nearly-impossible-to-fail statically-typed code.

A lot of code probably falls into a gray area where it becomes largely a matter of programmer taste.


This reminds me of the Qt framework which is C++ but doesn't make any use of exceptions.

Before I worked with Qt, I never thought this was possible without sacrificing the API, but the Qt API is very clean and doesn't seem to suffer from that design decision. So I looked around in the Qt API for some time, trying to learn how they managed to get along without exceptions. And I think I finally got it.

The whole of error handling in Qt mostly boils down to providing sensible null objects. That is, instead of using NULL pointers (as in C) or generic null objects (as in JavaScript), for all kinds of objects there are specialized null objects which behave sensibly to the greatest possible extent.
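Not Qt code, but a minimal Python sketch of the same null-object idea (the `User`/`NullUser` names are invented for illustration):

```python
class NullUser:
    """Null object: stands in for a missing user without being None."""
    name = ""

    def is_valid(self):
        return False

    def send_email(self, msg):
        pass  # silently do nothing, like a Qt null object


class User:
    def __init__(self, name):
        self.name = name

    def is_valid(self):
        return True

    def send_email(self, msg):
        print(f"mail to {self.name}: {msg}")


def find_user(users, name):
    # Return a typed null object instead of None on a failed lookup
    return users.get(name, NullUser())


users = {"ada": User("ada")}
find_user(users, "bob").send_email("hi")   # no crash, no if-check needed
print(find_user(users, "bob").is_valid())  # False
```

Callers that only care about the happy path can just chain calls; callers that do care can still ask `is_valid()`.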


The core of the post:

  try/catch, which blurs the line between errors
  that are *mistakes* (accessing a property of null,
  calling .write() on a stream after .end(), etc.), 
  and those which are expected application-level problems
  (invalid data, file missing, and so on).
Totally agree.


In a typed exception system, you just assign different kinds of exceptions to all of those. Java also distinguishes between checked exceptions ("known unknowns") which need to be declared, and runtime exceptions ("unknown unknowns"). Sure, you can still catch all of them in one place, but that's very rarely a good idea.

A non-typed exception system (that doesn't solve the problem with another mechanism) just sounds like a very bad idea. Is Javascript like that?
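In Python terms, the typed-exception split between "known unknowns" and mistakes looks roughly like this (`AppError`/`InvalidDataError` are invented names):

```python
class AppError(Exception):
    """Base class for expected application-level problems."""


class InvalidDataError(AppError):
    pass


def parse_age(raw):
    try:
        age = int(raw)
    except ValueError:
        # Translate a low-level error into a typed, expected one
        raise InvalidDataError(f"not a number: {raw!r}")
    return age


try:
    parse_age("forty")
except InvalidDataError as e:
    print("bad input:", e)  # expected problem, handled here
# A TypeError or AttributeError (a *mistake*) would NOT be caught
# here and would crash loudly with a stack trace, as it should.
```

The catch clause names exactly the class of failure it knows how to handle; everything else keeps bubbling.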


It totally depends on your application. I wouldn't call a nonexistent file referenced by a db record, a strange log entry, or malformed XML returned by a web service an "expected application-level problem". There is absolutely no way for a library writer to decide between an expected and an exceptional condition.


This is the standard in many languages, and one that I believe Java got wrong. It rushed into the whole concept of forcing everyone to handle all exceptions, building in so many exceptions for things like connection failures and parsing errors that can reasonably be expected to happen constantly in the normal runtime of an application.

I follow the rule in almost every language that my app should still be able to run normally if all try/catches were commented out; otherwise I should be handling something better. Exceptions are expensive and don't always back out nicely when they unwind in most languages.

In Objective-C, we play by C rules more often than not and almost never use exceptions (except for assertion exceptions). Rather, most methods take an out NSError pointer if they need to pass up errors.

In Python, it's more pythonic to ask for forgiveness than to check beforehand, so exceptions happen all the time. I'm not sure how I feel about that, but it seems to work well enough.
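The two styles side by side (a small sketch; the path is an invented example that should not exist):

```python
import os

path = "/tmp/gr-no-such-file-12345.txt"

# LBYL ("look before you leap"): check first. This is racy --
# the file can vanish between the check and the open.
if os.path.exists(path):
    with open(path) as f:
        data = f.read()
else:
    data = ""

# EAFP ("easier to ask forgiveness than permission"): just try,
# and handle the expected failure where it occurs.
try:
    with open(path) as f:
        data = f.read()
except FileNotFoundError:
    data = ""
```

The EAFP version has no window between check and use, which is one reason the idiom is preferred in Python.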


> It was a little rushed with the whole concept of forcing everyone to handle all exceptions

Just to clarify, Java doesn't force you to handle all exceptions (maybe it did at one point in time?). Exceptions which inherit from RuntimeException are unchecked, and you can choose whether to handle them or not. Exceptions which don't inherit from RuntimeException are checked.


That being said, I'd like to quote these passages from the official Java tutorial:

"Generally speaking, do not throw a RuntimeException or create a subclass of RuntimeException simply because you don't want to be bothered with specifying the exceptions your methods can throw.

Here's the bottom line guideline: If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception."

Anyway... you don't necessarily need to handle exceptions even if they are checked (in cases where it doesn't make sense for your code to handle them). Add a 'throws' clause and let the calling code handle them.

So the poster of the grandparent comment really got it wrong when it comes to Java and exceptions.


> you don't necessarily need to handle exceptions even if they are checked (in cases where it doesn't make sense for your code to handle them). Add a 'throws' clause and let the calling code handle them.

The problem with this is it affects the signature of all methods in your call hierarchy up to the point where it's handled as you're forced to declare that you throw exceptions. A simple change in one class could quickly become a breaking change involving several classes just because an exception can now occur.


Ruby also gets it wrong in several common cases. The IO libraries throw exceptions on expected events, for instance, and there are a couple of places where I flat-out disagree with the Exception class hierarchy.


''Try/catch is goto wrapped in pretty braces.''

From this I can infer that lambda is also an anti-pattern.

But as good as this straw-man post is: yes, you can misuse exceptions; no, that doesn't make them an anti-pattern.


Glad I wasn't the only one thinking of Lambda: The Ultimate GOTO


I think the best is a little of both ways. For instance in python:

  x = some_dict['meh']
Will raise if 'meh' doesn't exist. If you believe that 'meh' should be there in your program, it's fine to 'let it raise an exception'.

However, if 'meh' might legitimately be absent, it's better to use a default-value style such as:

  x = some_dict.getDefault('meh', 'some-neutral-value')
And continue without raising because there's no need to raise as there's nothing exceptional here.

Ideally, raising an exception should have the meaning "Hey, something is wrong here and I don't know what to do next". And someone in the call hierarchy would handle this and say "Oh, the connection stopped.. that's why it's not working. I'll reconnect and call you again".

And note that this "parent handling" might be way higher than where the problem occurred..
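That "parent handling" pattern, sketched roughly (`ConnectionLost`, `fetch`, and `run` are invented names):

```python
class ConnectionLost(Exception):
    pass


def fetch(conn):
    # Deep in the call stack: just signal the problem,
    # don't decide recovery policy here.
    if not conn["open"]:
        raise ConnectionLost("socket closed")
    return conn["data"]


def run(conn, reconnect):
    # Much higher up: this caller knows how to recover.
    try:
        return fetch(conn)
    except ConnectionLost:
        reconnect(conn)
        return fetch(conn)  # retry once after reconnecting


conn = {"open": False, "data": 42}
print(run(conn, lambda c: c.update(open=True)))  # 42
```

The code that raises and the code that recovers can be many frames apart; nothing in between needs to know.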



No he doesn't. defaultdict provides a default value for ALL missing values. His approach allows him to target one key in particular, and is a very common python idiom.
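To make that distinction concrete (a small sketch; `plain`/`counts` are invented names):

```python
from collections import defaultdict

# dict.get targets ONE lookup, without mutating the dict:
plain = {"a": 1}
print(plain.get("b", 0))  # 0
print("b" in plain)       # False -- "b" was never inserted

# defaultdict supplies a default for EVERY missing key, on access:
counts = defaultdict(int)
counts["b"] += 1          # missing key silently becomes 0, then 1
print(counts["b"])        # 1
```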


Whoops, you're both right, thanks for pointing it out. I meant dict.get, i.e.

  {'a':1}.get('b',5)


I'm confused - where does that come from?

  >>> x.getDefault(1, 'Que?')

  Traceback (most recent call last):
    File "<pyshell#2>", line 1, in <module>
      x.getDefault(1, 'Que?')
  AttributeError: 'dict' object has no attribute 'getDefault'


Because it's actually (somewhat confusingly) called setdefault.

    >>> a = {1:'foo',2:'blah'}
    >>> a.setdefault(1,'default')
    'foo'
    >>> a.setdefault(4,'default')
    'default'
    >>> a
    {1: 'foo', 2: 'blah', 4: 'default'}
This actually updates the underlying data structure. Very handy when you want to deal with nested structures, since you can do something like:

    matches.setdefault(keyword,[]).append(s)
This will append s to the list in matches[keyword] if it exists, or set matches[keyword] to [] and then append.


OT: Has it always been the case that the top 170px are just fixed on groups.google.com, wasting all that vertical space?

http://www.az2000.de/pics/screenshots/Screen%20Shot%202011-1...


OMG. That is a horrible UI. It makes Groups totally unusable. Half of my laptop's vertical screen is the unscrollable fixed panel with a few big buttons and the search bar. Then 1/4 of the horizontal screen is fixed with the left navigation links. The actual content is crammed into the lower-right half of the screen. The bottom arrow of the scrollbar is missing. Please, Google, not everyone has a 24-inch vertical monitor.

It used to be that Microsoft's website took up lots of fixed upper screen real estate for "branding." Now they have fixed it and it is much more usable. Have the Microsoft designers gone to work for Google?


At my (admittedly meager) 1024x768, between browser and now the google groups redesign, ui area takes up almost half of the upper space before any content is shown.


It's a new GWT-based design.


Isaacs is spot-on, and this is one reason why I love Objective-C. NSException is for programming errors, which should be rare. You can use try/catch, but you almost never do. NSError is for expected application-level errors.


This is exactly what I came here to say. NSException and NSError combined with the fact that you can send messages to nil objects and just get nil objects back lead to much "cleaner" code in my opinion.


I think he has a really interesting POV but the conversation as a whole is definitely a very informative dialog on the value of try/catch in an async environment. As they say, read the whole thing...


I think a lot of programmers use try/catch when what they really want is Prolog-style failure:

  my_parser(String, Output) :-
      parseJSON(String, JSON),
      extract_the_values_I_want(JSON, Output).

  if my_parser(String, Output) then
      ...do stuff with Output...
  else
      ...show user "couldn't parse" message...
I've been using Mercury (a strongly typed Prolog) for years now and have never once found the need for exceptions (which Mercury supports) to indicate anything other than catastrophic failure.


What are your motivations for using Mercury? Are you using it for hobby projects, research/academic purposes or actual production code?


Hobby projects and hobby research. It has some pretty intense features if you're into type theory.

IMO the library support isn't yet there for production code (e.g. no or only rudimental GUI/networking code). But if you're just using it for computation (e.g. AI or compiling or some such) it's pretty solid, and its FFI is the most straightforward I've ever seen.

FWIW I hear that these people: http://www.missioncriticalit.com/ and these people: http://www.princexml.com/ use it for production code, but I don't know much about them, beyond that Håkon Wium Lie (of CSS and Opera fame) is the Director of the latter.

I maintain a blog about Mercury's features for interested users: http://adventuresinmercury.blogspot.com/ if you're interested.


I'm definitely interested. Thanks. I've now bookmarked your blog.


The way I see it exceptions are helpful in creating self documenting code if used thoughtfully and effectively.*

Some might argue subclassing the base exception to create more specifically named ones is a bit silly, because you may be doing little more than renaming a class several, maybe many, times. But it may be countered that this is simpler than determining a list of error codes and then leaving it to other people to find out what a random string or integer even means. I personally haven't found this helpful for debugging.

Both ways are better than something just returning false and leaving you to figure out why it did that in the first place.

The main benefit to me, however, is to be able to throw the exceptions deeper in the code (where appropriate) and to be able to then catch them at the very last moment. While not perfect (I can't account for everything), it allows me to keep my error handling code cleanly separated from the rest of it at the most abstracted point. Anything lower level will by necessity be a little messier.

* e.g. not wrapping an entire script in one try/catch block.


I don't understand this. According to some, exceptions should be used only in an exceptional context. Can anybody define what "exceptional" is?

I use exceptions to get the context in which the error happened: call stack, variables, stuff like that. I can log it, and I can give a good error message to the user. I never think about whether an error is exceptional.


> Try/catch is goto wrapped in pretty braces. There's no way to continue where you left off, once the error is handled.

...

> But still, nothing is as bad as the common "On Error Resume Next" that so many terrible VB programs start with.

Assuming "On Error Resume Next" does what I think it does, you can't complain about both of these at once.


"On Error Resume Next" does not do what you think it does. It ignores the error, and marches blithely on. That is very different than "continuing where you left off, once the error is handled."


One of my personal pet peeves is seeing: "try: foo() \ except: pass" in code.
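A minimal illustration of why that pattern is so dangerous: a bare `except` swallows even things like Ctrl-C, not just the error you had in mind.

```python
# A bare "except: pass" catches BaseException -- including
# KeyboardInterrupt and SystemExit, which it should never touch:
try:
    raise KeyboardInterrupt  # user pressed Ctrl-C
except:
    pass                     # ...and we silently ignored it

print("still alive")         # the interrupt never surfaced

# "except Exception" would have let it propagate, because
# KeyboardInterrupt does not inherit from Exception:
print(issubclass(KeyboardInterrupt, Exception))  # False
```

At minimum, catch `Exception` rather than everything, and log what you caught.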


I think a good rule is that if you're not planning (or able) to confidently deal with an exception and put the app into a known state, then don't catch it.

Some developers seem to have a fear of allowing any errors to be seen by the user and so they have a habit of swallowing exceptions. I guess it makes them feel that the code is more stable, but it actually masks bugs and makes them impossible to troubleshoot. I have a name for these - I call them "insidious bugs" especially when they result in data loss, which is common with those types of bugs.

I was just in some code the other day that was littered with a bunch of these:

try { ... code here ... } catch(e) {}

not even a console statement or anything! It took two of us over 6 hours chasing down what should have been a 15 minute bug due to that code.


The problem with try/catch, really, is that it creates a tremendous amount of visual noise. That makes sense for critical operations that might crash everything, but it just doesn't make sense when you can tolerate certain errors, or when checking for proper output with a conditional looks better and is more appropriate.

Obviously then you have an issue with language conventions. How do you check for the returns of a method that does an in-place modification without returning anything? What happens when Null is a proper return value? What about having to access specific members or registers to check?

Perhaps, much like logging frameworks, we may need to categorize throwable actions by their importance, or think of a new convention that can handle such things more easily.


But what is your alternative? Doesn't checking the return code of every statement also make a tremendous amount of visual noise?

I've seen C code where about 80% of the code is dedicated to error handling and corner cases. Before every statement it has to check the current error status, and after it the return value... it's almost unreadable.

With try/except/finally a lot of that can be cleaned up, by handling the errors higher-up in the call hierarchy where they make sense to handle.

I see a lot of critique of exceptions, but haven't seen one better alternative yet, at least one that doesn't require switching to an obscure programming language.

Edit: this does assume that exceptions are used properly: for errors that should bubble up, not as extra return value.
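A toy contrast of the two styles (step and pipeline names invented):

```python
def step_a():
    return True  # pretend each step can fail

def step_b():
    return False

def step_c():
    return True


# Return-code style: every call site checks inline, noise everywhere.
def pipeline_rc():
    if not step_a():
        return "a failed"
    if not step_b():
        return "b failed"
    if not step_c():
        return "c failed"
    return "ok"


# Exception style: steps raise; one handler higher up.
class StepError(Exception):
    pass

def step_b_exc():
    raise StepError("b failed")

def pipeline_exc():
    try:
        step_a()
        step_b_exc()
        step_c()
        return "ok"
    except StepError as e:
        return str(e)


print(pipeline_rc())   # b failed
print(pipeline_exc())  # b failed
```

Same outcome, but in the exception version the happy path reads straight through and the error policy lives in one place.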


try/catch, which blurs the line between errors that are mistakes (accessing a property of null, calling .write() on a stream after .end(), etc.), and those which are expected application-level problems (invalid data, file missing, and so on).

While I tend not to be a big fan of try/catch myself, one thing I like about Objective-C and Cocoa is that they at least provide classes to separate programmer mistakes (NSException, used with try/catch) from application issues (NSError, used however it best fits the app). If the try/catch pattern must be used, I think it's sensible to have this kind of separation of responsibilities.


How on earth can typed catches be worse than untyped catches? As a Pythonista, this whole argument makes my head hurt. He's railing against everything considered beautiful about exception handling in the Python community.


"The really nice thing about the cb(error, result) approach is that it constantly reminds the programmer that you must acknowledge that failure may come from any request you make."


As we say in C++ "exceptions should be exceptional".


Exceptions can make your code much shorter and more readable by virtue of centralized error handling. However, try/catch can be pretty ugly in cases where you need to handle exceptions right away. The D language has scope guards to help with this. I think we need some kind of better syntax to handle cases like this, but getting rid of exceptions entirely is a bad idea.
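Python's closest analogue to D's `scope(exit)` is a context manager; a rough sketch (`scope_guard` is an invented helper, not a stdlib name):

```python
from contextlib import contextmanager

@contextmanager
def scope_guard(cleanup):
    # Run cleanup on scope exit, success or failure --
    # roughly what D's scope(exit) guarantees.
    try:
        yield
    finally:
        cleanup()


log = []
try:
    with scope_guard(lambda: log.append("cleaned up")):
        log.append("working")
        raise ValueError("boom")
except ValueError:
    pass

print(log)  # ['working', 'cleaned up']
```

The cleanup runs even though the block raised, without a visible try/finally at the call site.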


Try/catch may be an anti-pattern in a dynamically typed language, but not in a statically typed one. In static languages exceptions are important and valid because they help avoid the "returned a null what?" question.


Exceptions aren't the one and only solution to 'null' issues in Java et al.

http://blog.orbeon.com/2011/04/scalas-optionsomenone.html
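A rough Python analogue of that Option idea, using `typing.Optional` (`find_index` is an invented example):

```python
from typing import Optional

def find_index(xs, target) -> Optional[int]:
    # Encode "might be absent" in the return type instead of raising;
    # the caller is forced to confront the None case.
    for i, x in enumerate(xs):
        if x == target:
            return i
    return None


idx = find_index([10, 20, 30], 20)
if idx is not None:
    print("found at", idx)  # found at 1
else:
    print("not found")
```

Unlike Scala's Option this isn't checked by the runtime, but a type checker like mypy will flag unguarded use of the result.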


Anyone have an opinion with regard to Ada in this context? From what I've read, Ada tries to do soft errors as a rule.


It's only an anti-pattern in languages that don't have resumable conditions (i.e. other than Common Lisp).


I'm not sure how I'm supposed to read this... I guess I need an account?


Sometimes Google Groups ask for an account, sometimes don't -- not sure why. Anyway, here's the post: http://pastie.org/2861111



