Raising exceptions or returning error objects in Python (lukeplant.me.uk)
44 points by EntICOnc 59 days ago | 61 comments



> If you instead raise exceptions, you are immediately forcing the calling code into a special control flow structure, namely the try/except dance, which can be inconvenient.

I fail to see how a try/catch is more inconvenient than a handful of isinstance calls.

What happens when you want to add a new error case? With exceptions, the exception will always be thrown. With error objects, you're going to fall through a bunch of isinstance checks and continue on with a result object which isn't the type you think it is. Then, some time later, you're going to try to reference a field that doesn't exist. This is the kind of thing that is a nightmare to debug.

I do appreciate the author's point that error objects are nice in functional programming languages. But using them in Python because you like using them in fp is not a good justification.


More important is the ability to subclass exceptions, so that adding new ones doesn't break client code.

    class MyBaseException(Exception):
        ...

    class MyExistingException(MyBaseException):
        ...

    class MyBrandSpankingNewException(MyBaseException):
        ...
The client can catch the base exception no matter what new exceptions you create in the future.
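
For example, client code along these lines (do_something is just a stand-in for whatever library call raises) keeps working unchanged when the new subclass appears:

    import logging

    def do_something():
        raise MyBrandSpankingNewException("something went sideways")

    try:
        do_something()
    except MyBaseException as exc:
        # The new subclass is caught here too, so existing client code keeps working.
        logging.warning("operation failed: %s", exc)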


This design claims MyBrandSpankingNewException was:

* So different we just couldn't represent it as MyExistingException and yet

* So similar that it's fine to just treat it as MyBaseException everywhere

In my experience, everywhere I see this pattern, all we're doing with MyBaseException is giving it to a human (via logging, stderr, or whatever), which means this is a lot of complexity just to move some text.

Although this strategy didn't formally "break client code", in practice, if it matters, the code will need to be rewritten to actually handle MyBrandSpankingNewException.


Usually a derived exception is a special case of the base exception, e.g. OSError -> FileNotFoundError. Sometimes you want to handle a specific error differently, and sometimes you don't care. By the way, that may be the reason to create a new subclass: to distinguish between the cases where you do care.


I don't think typed errors + isinstance checks are really the alternative.

As of Python 3.10, I can just write the same Result-monad-ish destructuring-pattern-matching code in Python that I do in e.g. Elixir:

    def foo():
        if whatever:
            return (True, some_data)
        else:
            return (False, some_error)

    def use_foo():
        match foo():
            case (True, good_data):
                ...
            case (False, ExpectedErr() as err):
                ...
            case (False, any_unexpected_err):
                ...
Why bother? Because often, you'll write functions where both "good data" and "bad data" are just data — for example, a classify_spam() function returning (category, confidence_score). Neither case is exceptional; both cases are expected. But you have to handle them separately.
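
A rough sketch of that classify_spam() idea (the scoring rule and threshold here are made up):

    def classify_spam(message: str) -> tuple[str, float]:
        # Hypothetical scorer: both outcomes are ordinary, expected data.
        score = 0.9 if "free money" in message.lower() else 0.1
        return ("spam", score) if score > 0.8 else ("ham", score)

    match classify_spam("FREE MONEY inside!!!"):
        case ("spam", confidence):
            print(f"quarantine (confidence {confidence})")
        case ("ham", _):
            print("deliver")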

A great example would be an HTTP request-response:

    match requests.get(...):
        case Response(status_code=code) if 300 <= code < 400:
            ...  # do a redirect
It would make absolutely no sense for 3xx responses to raise an exception, no? But you do want to handle them separately from the regular flow of code. You want to be dealing with a categorized return object, where you're forced to recognize the category of it — to match on it — in order to do anything further with it.

(I do wish that the default for `match` was to blow up with a ValueError if you run off the end, rather than being a no-op, though. As it stands, you have to explicitly add `case _: raise ValueError(...)` clauses if you want to be strict.)


The problem is the overloaded "some_data" / "some_error" return value. It's arguably worse semantics than just using exceptions.

    try:
        data = foo()
    except SpecificException as err:
        ...
    except Exception as unhandled_exception:
        ...
    except BaseException as err:
        ...
It doesn't really look that different than your code above.

Further:

    return (False, some_error)
Would be arguably better to write:

    raise NoBarError(f"Bar wasn't found in file {f}")

> match requests.get(...):

Speaking of requests, I actually like the requests API here. It always gives you a response object, but allows you to do this:

    resp = requests.get(...)
    resp.raise_for_status()  # raises an exception on 400 and above.
It doesn't force you into exceptions, but allows you to choose how you want your errors. The reason is that resp.content may still be meaningful even if resp.status_code is 500.

Further it gives you:

    resp = requests.get(...)
    if resp.ok:
Error and data are not overloaded on the return value. They are a part of the same class.


> The problem is the overloaded "some_data" / "some_error" return value.

My point was that it's not "overloaded", because tuples returned for the purposes of matching aren't product-types (i.e. essentially objects but with unnamed positional fields, where the same position has the same meaning regardless of the other positions' values) but rather "mutually-exclusive-pattern-clause-matching conventional sum types" — the pure-data equivalent of the Result monad's `Ok(Data) | Err(Error)`. Think Erlang's `{ok, Data} | {error, Error}`.

With tagged tuples, you're not supposed to attempt to work with the tuple's fields independent of the whole — i.e. doing tuple[1] is a design smell, because it works regardless of whether you've got the tuple you expected or not. Rather, you're supposed to recognize the pattern that the tuple represents, including the tuple's "shape", and any specific literal tags; and then destructure the result into variables only if the pattern fully matches. Tuples and pattern-matching go together. (And it's kind of weird that Python got one long before the other.)

Note that there's no real difference between using tuples here, and using plain data-carrier classes. A type-tag as a positional tuple parameter is not trying to do anything different than an object with an explicit type is; tagged tuples just avoid needing to create little bespoke data-carrier classes for every possible little returning-an-enum-of-patterns situation that crops up a thousand different places in a codebase.

> Would be arguably better to write: [raise the error]

Why? Just because a piece of data "represents an error", doesn't mean that it's a programming/logic/operational/runtime error in the semantics of your system. It's just data you're carrying around, that represents a certain condition. Maybe a condition you're modelling in some other system. Maybe a condition that's equally common to the "data" path.

Picture, for example, implementing an HTTP caching gateway, like Varnish. Receiving a 4xx/5xx response from the origin isn't an error for Varnish — rather, it's an entirely expected condition. An "HTTPErrorResponse" is just a type of data that Varnish works with — caching it, serving it, etc.

I recently wrote a Python function called is_domain_trustworthy. It returns True | (False, reason_class, reason_detail). In theory, this function could raise these as errors... but it's not a validation function; it's a predicate. It's answering a question. It wants to answer that question with a yes or no; it's just that in the False case, it also has some extra detail it can pass along about why it decided the domain wasn't trustworthy, which can be used for debugging / audit logging. This "yes or no-and-why" answer to the question is just data. That computed data gets put into a database, even.

You can certainly build a validation function on top of this predicate. You'd do so by writing a pattern-match, where one or more arms of the pattern-match raises an exception.

But personally, I find it extremely useful to have the "plain" version of the function not raise, but just give me data. More flexible in code; much simpler to work with in the Python REPL; etc.
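
To make that concrete, a rough sketch of the layering (the actual checks and reason names are invented here, not the real function):

  KNOWN_BAD_DOMAINS = {"evil.example"}   # stand-in blocklist

  class UntrustworthyDomainError(ValueError):
      pass

  def is_domain_trustworthy(domain: str):
      # Predicate: answers the question with plain data, never raises.
      if domain in KNOWN_BAD_DOMAINS:
          return (False, "blocklisted", domain)
      if domain.count("-") > 3:          # toy "suspicious name" heuristic
          return (False, "suspicious_name", domain)
      return True

  def validate_domain(domain: str) -> None:
      # Validation layer built on top: one arm of the match raises.
      match is_domain_trustworthy(domain):
          case True:
              return
          case (False, reason_class, reason_detail):
              raise UntrustworthyDomainError(f"{reason_class}: {reason_detail}")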

> resp.raise_for_status(); resp.ok

A difference with these is that they are fundamentally examples of a library author trying to predict what the library's users might want to treat as semantic error conditions.

Mutually-exclusive pattern matching, on the other hand, doesn't require that the data is modelled to "bake in" any knowledge of conditions users might care about. You can arbitrarily group patterns together. For an example:

  match resp:
      case Response(status_code=code) as long_resp if code in [200, 500, 502, 503, 504]:
          # these requests take a nontrivial amount of time, so capture the request in a histogram
          ...
      case _:
          pass
Note how there's nothing on either arm here that's anything like an error condition. It would never make sense to have a library know about this condition, let alone to treat it as an exception. It's just a pattern, where matching that pattern means your data now takes on some additional meaning.


Error classes will always be second class citizens in python and you'll lose them if a first class exception is thrown.

Worse, people using your code now have to juggle two kinds of error-handling patterns, exceptions and tuple unwrapping, unless you can guarantee your code will never raise exceptions.


> unless you can guarantee your code will never raise exceptions

I mean, if you're writing a predicate function — or any other function that "just returns data" — then yes, you certainly should guarantee that it never raises an exception†. It should be a pure function. Anything else would be extremely annoying.

How Python people deal with the fact that simple things like dictionary key lookup or URL parsing raise exceptions without thinking of that as a language-design error, I'll never understand. Give me a language where functions return types like Option that have to be explicitly acknowledged and handled exhaustively, any day, over Python's "code written only for the happy path that nevertheless works without warnings, right up until it ever gets fed an invalid input, upon which point it crashes, because nothing ever forced you to handle that error, or even indicated that you would want to."

† Or, at least, it shouldn't be throwing any exception your caller would ever expect to need to catch as part of the semantics of using your library. What Python calls "errors" and Java calls "unchecked exceptions" — e.g. MemoryError and the like — are fine to let "leak out" of pure functions, because those exceptions aren't really intended for the direct caller to handle; they're intended to skip right past the caller, to whatever top-level framework code created the thread that the caller is running in. (See also: Haskell's `error` function, which, despite ML-alikes heavily favoring sum-typed return types, nevertheless exists, for just these same cases.)


I'm not arguing against the power of the error classes. I argue that you're turning Python into Golang, whose error handling is arguably worse than Python's or Java's.

> How Python people deal with the fact that simple things like dictionary key lookup or URL parsing raise exceptions without thinking of that as a language-design error, I'll never understand.

"if "x" in dict", or "dict.get("x")" works really well. But again, if you want the exception, it's there. "dict["x"]" will raise.

You also have sets in python so you can do stuff like this:

    potential_keys_in_d = set(d.keys()).intersection(potential_keys)
    potential_keys_not_in_d = set(potential_keys) - set(d.keys())
A lot of production code uses urlparse() and it raises ValueError. https://docs.python.org/3/library/urllib.parse.html#urllib.p...

> Or, at least, it shouldn't be throwing any exception your caller would ever expect to need to catch as part of the semantics of using your library.

So your code does 1/0. What should the language do? raise an exception? return NaN? Panic? Core dump?

I ask because 1/0 is perfectly recoverable in most cases. In C it's undefined (core dump anyone?). Golang panics. Python and java raise an exception. And javascript returns ... well ... something worse.

> What Python calls "errors" and Java calls "unchecked exceptions" — e.g. MemoryError and the like — are fine to let "leak out" of pure functions, because those exceptions aren't really intended for the direct caller to handle.

From the standpoint of a being a pragmatic programmer, the language just needs to choose one error channel and let that be the primary form of error passing.

Golang has two error channels, panics and return values, and I'd argue in this respect it's worse than Java.


> A lot of production code uses urlparse() and it raises ValueError. https://docs.python.org/3/library/urllib.parse.html#urllib.p...

Yes, that was precisely my point: I don't understand why people are okay with this. URLs being invalid isn't an error; it's expected. If you're constructing a URL from a string, you haven't yet validated the URL. So why would it be exceptional for the validation to fail? I would expect explicitly calling a parser function to give me the sum-typed result that the parser produces: either the successfully-parsed data, or the relevant shift/reduce failure.

Consider: when I call urlparse(), I'm not demanding that the runtime construct a URL from the string. The function is not called `URL()`, like it is in Java (where IMHO it becomes perfectly justified to throw an exception — you expect `URL()` to give you a URL.) Calling a parse function like `urlparse()` does not imply that the thing I get back is going to be a URL. It implies that it's going to be the result of parsing, whatever that is. Just like, when I call `requests.get()`, I expect that the thing I get back is going to be the result of the request, whatever that is.

> I ask because 1/0 is perfectly recoverable in most cases.

It is recoverable, but only locally. No caller has any idea what to do with the fact that the callee divided by zero. It's a case where doing something and then not immediately handling it in the same local scope, is a programming error on the part of the library author.

> What should the language do? raise an exception? return NaN? Panic? Core dump?

Certainly, in dynamic languages like Python, programming errors in libraries have to be dealt with somehow. But it doesn't really matter what particular mechanism they use, because they shouldn't be recoverable by the caller. The callee failed the runtime's preconditions — like doing a use-after-free in C. The whole runtime is now poisoned, because the library, failing to handle this failure, very likely left its internal state in some indeterminate half-mutated form.

This is just a too-late equivalent to a static language's "this fails to compile." So aborting the entire runtime — or doing anything else that means "this is unshippable as-is" — is probably the right choice. (Maybe delete any generated .pyc file for the module the code was defined in, while you're at it.)

Maybe use a mechanism that can be caught by the runtime itself — if the language abstract-machine includes some sort of mobile-code-execution-sandbox abstraction like V8's isolates. Because programming errors in sandboxed user code shouldn't kill the host-runner. But that's the only case I can see where it'd be relevant.

Ideally, in general, a dynamic language should do enough static analysis at load time to detect programming errors, such that it can have `import` itself raise, refusing to poison the environment with a module with axiom-invalidating semantics.

(Though, in the case of integer division specifically, we've really just screwed ourselves over in most languages by making it a binary operator with a single integer result upon untrusted input. IMHO languages could do a lot better, by 1. having an explicit "integer divisor" linear-type that cannot be zero, and only allowing integer division by that type, such that you have to construct that type first; and/or 2. making integer division by regular can-be-zero integers a branching operation, where you must always define the code path for when the divisor is zero.)
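
(A toy Python sketch of option 2, just to make it concrete; the helper is hypothetical:)

  def branching_div(a: int, b: int, *, if_zero):
      # The caller must always define the code path for a zero divisor.
      return a // b if b != 0 else if_zero()

  print(branching_div(10, 3, if_zero=lambda: 0))   # -> 3
  print(branching_div(10, 0, if_zero=lambda: 0))   # -> 0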

> Golang has two error channels, panics and return values, and I'd argue in this respect it's worse than Java.

I disagree. Golang has one "error channel" — return values. You should never see/catch a panic from a callee. You should be able to just forget panics exist — similar to Errors in Java, or `error`s in Haskell.

Golang panics have two uses:

1. the same thing I described above — unrecoverable operational states and runtime-detected programming errors in libraries. Abstract-machine axiom violations. Callers aren't supposed to handle/`recover` these. Nobody's supposed to touch these; they're supposed to just let them bubble, until the program halts. It doesn't matter that these are implemented via `panic` — they could just as well not bubble at all, and terminate the process on the very instruction they're raised on. They just-so-happen to use `panic` so that deferred functions get called for cleanup, so that sockets aren't left hanging open, files get flushed to disk, etc.

2. structured goto — like Ruby's `throw`+`catch` (which have nothing to do with its exception system, which uses `raise`+`rescue`), or Common Lisp's `catch`+`throw`. Golang JSON parsing, for example, uses `panic`+`recover` as a way to break out of parser stack frames until it hits the toplevel frame of the parse routine, so it can then immediately return a parse error to the caller. These are never supposed to leak outside the design of a library; so you're not supposed to be handling these, either. Also, this isn't error handling, per se; this is an optimization over returning a condition, then checking for that condition in the caller and breaking out of a loop and returning the same condition further, etc. It could just as well be used to signal a condition like "I'm done, I found what I'm looking for" as "I failed to parse." It's infrastructure, mechanism, that says nothing about the semantics of what it's trying to do or what type of information it's carrying. (Personally, I would argue that this should have been given a separate name from `panic`; and that `recover` should have only been able to catch these, not the axiom-violation panics of #1. But I understand that Golang's designers are big fans of avoiding special-purpose semantic primitives in favor of general-purpose "mechanism" primitives — as can be seen in their previous design of e.g. Plan9's everything-is-a-filesystem "mechanism" primitive.)


> URLs being invalid isn't an error; it's expected.

Of course it is. Would you report back to the user: "Expected Behavior: Invalid Url."? I bet instead even you would write: "Error: Invalid Url".

>> I ask because 1/0 is perfectly recoverable in most cases.

> It is recoverable, but only locally. No caller has any idea what to do with the fact that the callee divided by zero.

f(x) = x+1/x shouldn't crash your space station life support because it's undefined at 0. Just like tan() is undefined at pi/2. As you said, it's expected.


> I don't think typed errors + isinstance checks are really the alternative.

It's the alternative proposed by the article, because the code from the article is 5 years old. The author's claim is that calling isinstance is better than exceptions.

Pattern matching makes error objects nicer, but still doesn't make them compelling for the general case.

There are cases, like HTTP requests, where it makes sense to embed status in the response object. But, critically, every http response has both a status and some content. Every response object is the same type and can be treated in the same way.

The article's proposal is to return either a string, or one of a handful of error objects.

I will grant that some APIs are nicer when you embed status information in the response type. That is a special case. Preferring error objects over exceptions in the general case is wrong, IMO.


I think things become a lot clearer when you unpack which APIs those "some APIs" are.

The case where sum-typed return types (because that's what we're really talking about here) truly shine, isn't in handling "errors" in the sense of invalidated preconditions that you need the caller to handle.

Rather, the case where sum-typed return types make sense, is in handling condition states more generally.

When I say "state" here, think "finite-state machine." Or actually, maybe think "pushdown automata." Imagine a sum type representing the state of a door, like:

    Open | Closed(door_tag: DoorTag() | None) | ClosedAndLocked(required_key_bidding)
If you call door.try_open(), it makes perfect sense to return this sum type. This type represents the new state of the door, after your attempt. None of the states are exceptional per se, because this is just an attempt to open the door. You're equally happy to have the door in any state afterward. "Open it if you can", so to speak.

Certainly, you could write another function door.open() † that does throw an exception if the door cannot be opened. It might also raise a DoorAlreadyOpenException. But this function makes more sense to implement on top of the first function, than the other way around. Door.try_open is more core to how the door models its state and reacts to input, than Door.open is.

† In other languages, the naming-complexity expectation would be reversed — the non-raising variant would be `Door.open`, while the raising variant would be something like `Door.open!`.
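
In Python terms, a rough sketch of that shape might look like this (the state names and fields are illustrative, not a real API):

    from dataclasses import dataclass

    @dataclass
    class Open: pass

    @dataclass
    class Closed:
        door_tag: str | None = None

    @dataclass
    class ClosedAndLocked:
        required_key_id: str = "brass"

    class Door:
        def __init__(self, state):
            self.state = state

        def try_open(self):
            # Non-raising variant: just report the resulting state.
            if not isinstance(self.state, ClosedAndLocked):
                self.state = Open()
            return self.state

        def open(self):
            # Raising variant, built on top of try_open().
            match self.try_open():
                case Open():
                    return
                case ClosedAndLocked(required_key_id=key):
                    raise PermissionError(f"door is locked; needs key {key}")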


Yeah; this proposal is pretty painful-looking to me, as a Python programmer.

How do you do finally logic with this pattern?


well it's simple, really. Python just has to add a defer statement to take care of that.


defer and finally are two different things.

finally may not be the last thing the function does.


Doesn't Python 3.10 add pattern matching? [1]. Seems to make this much more palatable. Specifically look at [2].

[1] https://peps.python.org/pep-0636/ [2] https://peps.python.org/pep-0636/#adding-a-ui-matching-objec...


Yes it does! It's such a nice addition to the language. It does make error objects more palatable, but I still don't see the benefit of error objects over exceptions. You still lose stack traces, and a universal way to get error messages. You're still taking a risk that the caller will miss a case and not handle the error at all, thereby shrinking the "pit of success".

Also, FWIW, the code in question is 5 years old. It seems to be more of an ideological question than a pragmatic one.


Exceptions come with program flow. Didn't need/want that flow? Too bad, you get it anyway and must re-structure your program accordingly.

Suppose there are two consumers of my function F, which usually returns a Digit but sometimes instead has a problem. Should F throw ProblemException, or should its return type be a sum type covering both Digit and Problem?

Alice uses F in some hot code that doesn't immediately care about the problem or the digits, while Bob is only using F in an edge case and needs the result immediately.

For Alice, needing to try F and handle the exceptions (which Alice doesn't ever care about immediately) is annoying. For Bob it works fine. But Bob can also write code to handle the Problem next to his code for handling the Digit. With exceptions we were forcing Alice to handle ProblemException, whereas with a sum type Bob can still choose where to do something with Problem as it suits him.


Alice isn't forced into using anything.

You appear to be forgetting that raising exceptions as part of program flow is standard and pythonic. It's OK to disagree with that, but as it's part of the language and part of the Python modus operandi, let's not go out of our way to illustrate our dislike. Just use another language.


> Alice isn't forced into using anything.

How so? Do you just mean, "Well, Alice needn't use Python" ?


Alice can use `with suppress(...)` to ignore the exception. That's just as much, if not less, extra control flow as forcing the caller to check the returned object's type.


Suppression destroys the problem, but Alice does want the problem, just not in the middle of this, say, tight loop. With a sum type Alice can put problems, together with Digits, in a pile to be handled later.
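
Roughly like this (F and the work items are stand-ins from the example upthread):

  def F(x):   # stand-in for the real function
      return (True, x) if x % 2 == 0 else (False, f"problem with {x}")

  work_items = range(10)

  results = [F(item) for item in work_items]   # hot loop: no try/except in sight

  digits = [value for ok, value in results if ok]
  problems = [value for ok, value in results if not ok]   # handled later, at Alice's leisure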


Then Alice can catch the exceptions and put them into a list for later, if that's what she really wants. This feels like a pretty contrived example.

The general case is that `get_foo()` should return a Foo, or throw an exception if it can't. It shouldn't return a Foo, or a BarError, or a BazError.

The fact that you can tuck these completely different objects away for processing later without knowing what they are is not a desirable feature. That's just going to be a source of bugs.

In a language with a stronger type system, I see the benefit of option types and the like. But in Python, having a list of `my_foos` contain mostly Foos, but also some garbage, is not something you want.


> This feels like a pretty contrived example.

Well, it's certainly not popular in Python today. And it seems the reaction from most Python programmers is they'd just force Alice to take the hit, whereupon presumably Alice ceases using Python for this application, and that's maybe fine, it's not as though there aren't other languages.

But it seems like a shame because Python can do this the other way.

> In a language with a stronger type system, I see the benefit

My instinct is that the direction of Python is towards a stronger type system. I remember when Python's type hints were a very obscure feature few ever cared about and now it seems like every other Python story involves the type system and type annotation is strongly encouraged. But I could be wrong.


There are legitimate cases in Python where you want the equivalent of "On Error GoTo Next", which currently requires a blowup of one statement into four (try, f(), except, pass). Returning an error can simplify such cases.

Sure, ignoring exceptions can be a (horrid) code smell, but in low-stakes contexts it's not necessarily bad code. You see this especially often in the context of one-off Jupyter notebooks that data scientists like writing, where quick iteration, flexibility, and ease of reading are more important than making robust, production-ready code.

Did my text scraper fail because of a network error or was it just some weird unicode character that broke it? Don't care. I just want some collection of simple english text to train my language model, and there's more than enough else on the internet for me to fuss about corner cases.


> which currently requires a blowup of one statement into four (try, f(), except, pass).

Or ...

  from contextlib import suppress

  with suppress(Exception):
      flaky_thing()
... assuming modern Python.


That's nice, thanks for sharing that.


> With error objects, you're going to fall through a bunch of isinstance checks and continue on with a result object which isn't the type you think it is.

If you are doing isinstance checks (or pattern matching or something else equivalent) for separating error/success returns where, as in Python, you don't have exhaustiveness checks, your default case should be treating the result as an unknown error, not a success.

Also, this error can be avoided on the called-function side by using a common type (concrete or root) for errors. Using a wrapper for errors, as with a Result type (whether or not successes are also wrapped, as they would be with a Result), achieves this, for instance.
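
e.g., a minimal wrapper along those lines (names are illustrative):

  from dataclasses import dataclass

  @dataclass
  class Err:
      error: object   # any payload: an exception, a reason string, etc.

  def lookup(table: dict, key):
      # Every failure comes back in the one wrapper type.
      return table[key] if key in table else Err(KeyError(key))

  match lookup({"a": 1}, "b"):
      case Err(error):
          print(f"failed: {error!r}")   # any error, known or unknown, lands here
      case value:
          print(f"got {value}")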


Yes, having a default case that throws an Unknown Error is the correct thing to do here. But that is still less ergonomic. For an unhandled exception type, the person creating the exception can attach a helpful error message. For an unhandled error object, you can't do any better than "An Unexpected Error Occurred".

You can do a little better by having multiple error subclasses. But at that point you're literally reinventing exceptions, with a greater onus on the caller to do the right thing. This seems to go against the "pit of success" idea from the article.

With exceptions, there's no way for the developer to accidentally miss an error case silently, or misunderstand the subtleties of the error inheritance structure.


In some API code, we adopted and used the heck out of the pattern of making Exception subclasses that mapped to HTTP error codes, like:

  class NotFoundError(ClientError):
      """Not Found."""
      status = 404
The top level error handler would catch exceptions raised like:

  raise NotFoundError(f"The requested object ID {object_id} doesn't exist.")
and turn them into consistently formatted error messages. The end result was that you could write very readable, understandable code like:

  def handle(request):
      validate_signature(request)
      check_authorization(request)
      person = fetch_from_database(request.object_id)
      return {"first_name": person.first_name}
The `validate_signature` method call may raise a 400 if the request is invalid. `check_authorization` could raise a 403 exception. `fetch_from_database` would raise a 404 if the row couldn't be fetched. In effect, each line of that function has an implicit early return: if it fails, processing stops there.

Compare and contrast with:

  def handle(request):
      if validate_signature(request):
          if check_authorization(request):
              person = fetch_from_database(request.object_id)
              if person:
                  return {"first_name": person.first_name}
              else:
                  return 404
          else:
              return 403
      else:
          return 400
Python's exceptions are idiomatic. Embrace them.
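
The top-level handler mentioned above is then only a few lines; roughly (the response shape here is a guess, not the actual codebase):

  def dispatch(request):
      try:
          return {"status": 200, "body": handle(request)}
      except ClientError as exc:
          # NotFoundError etc. carry their HTTP status on the class.
          return {"status": exc.status, "body": {"error": str(exc)}}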


What about actually using early returns? I feel like the implicitness of exceptions is more of a downside than an advantage:

  def handle(request):
    if not validate_signature(request):
      return 400
    if not check_authorization(request):
      return 403
    person = fetch_from_database(request.object_id)
    if not person:
      return 404
    return {"first_name": person.first_name}


That's twice the code for the same result. Also, `validate_signature` itself might have lots of similar code, like

  def validate_signature(request):
    signature = sig_from_request(request)
    payload = payload_from_request(request)
    check_signature(payload, signature)
where each of those nested calls follows the same pattern.

Yeah, you can do all of that with passing around error objects. A lot of people writing Go do that every day, so obviously it works and it's not overwhelmingly tedious. I highly prefer the Python style of EAFP though.


> That's twice the code for the same result.

Only if you ignore that the exception-throwing solution _needs_ comments like `# this might throw a 404` to understand the control flow. The early-return solution is what I'd call self-documenting code.


No more than you'd need comments like `# this might actually return None`. As long as you're consistent, either approach should be understandable by the people working with it.

Note that you can also name functions like `validate_signature_or_raise` if you want to be very explicit.


The most ergonomic error handling is a Result<Value, Error> with some syntax sugar to get-value-or-return-early-error, like Rust's `?` operator.


IOW the most ergonomic error handling is a monad, and I agree.


And exceptions can be considered a syntactic sugar on top of just such a monad. It's just that many believe it's too much sugar to be good for you, particularly the implicit rebind part.


Not exactly.

You may catch exceptions in between the computations, whereas with monads you have to unpack and match the error.

You can replicate the monad behaviour by catching the exceptions at the top level. I find that this approach is generally cleaner than intermediate exceptions but is more restrictive.


That's what I meant by "implicit rebind". But the fact that it's implicit for every expression doesn't make it any less of a monad.


I see, thanks for the clarification.


There's this language called Zig, where errors are represented as union values and there's this kind of syntax to work with them, but way more explicit. I really like how they do it.


Too much sugar can be bad, for flavor and health.


One thing I miss when the error object pattern is used in preference to exceptions is the stack trace.

With an exception, you get a detailed log of the full stack that led to the error. With an error object, I have to reverse engineer out all the places in the code that could potentially return that type of error. In a perfect world, you don't need to know that, and all the information you need is in the error object. But we don't live in a perfect world and very often the stack trace is the main lead you have to debug some crazy thing happening in production.


Yes, I have hit this too.

My planned go-to approach, if I were on a prod app instead of a toy, is to use something like `result`, but with the `Err` variant modded to generate an internal traceback whenever it's constructed.

With that said, I haven't ever done the above. Anyone done something similar? Or approached this a different way?
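
Untested, but the core of it would presumably be something like this (a plain class rather than the real `result` lib):

  import traceback

  class Err:
      """Error wrapper that records where it was constructed."""
      def __init__(self, error):
          self.error = error
          # No exception is raised, so grab the stack at construction time.
          self.stack = "".join(traceback.format_stack()[:-1])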


Given enough digging around in the python runtime system I'm sure you can find the current stack trace and shove that in an error class.

But now it's a lesser error than an exception. Any exception while processing your error will cause the stack to unwind and drop the error variable with the stack trace inside of it.

Now if you raise an exception while in the handling of an exception, python will keep around both stack traces.


I personally like the requests API.

Requests gives you a response, but lets you raise an exception if an error case was caught (404, 500, etc.).

And it's only one type (Response) and doesn't force the user into using isinstance to figure out if it succeeded or not.


Yessss... I'm a big advocate of using error objects rather than throwing exceptions. I think the industry over-uses throwing exceptions and it's basically turned into the new GOTO statement.

I wrote an article about this as well but for TypeScript [0] and I've got a relatively popular npm package that does what I mention in the article [1]

---

[0] - https://gdelgado.ca/type-safe-error-handling-in-typescript.h...

[1] - https://www.npmjs.com/package/neverthrow


I'm not surprised. TS exceptions feel limiting and awkward and I don't like using them, compared with Python's. Add to that, TS's powerful typing makes it easier to precisely type return values compared to Python's annotations, but you still cannot type what a function throws in TS.

In Go, I tend to use the return error pattern. In Python, I raise and try. With TS, I do both (but slightly more confident with returning errors).

IMO, rule of thumb: try not to surprise the developer after you, who most likely will be hired for knowing this language not this project (note: that developer may be future yourself). When in Rome...


... do as the Romans do! Agreed :)


> I think the industry over-uses throwing exceptions and it's basically turned into the new GOTO

This is a misunderstanding of what's bad about GOTO versus jumping to a label. eg Functions are labels you GOTO. The difference between a GOTO instruction and a Function call, for the purposes of differentiating the methodology of code routing, is that a Function is part of a stack that returns in a serial fashion. ie Each element of the stack returns to the parent before disappearing, which can be thought of as "unwinding". An exception unwinds a Function stack without restriction (eg no consideration of signatures like args/returns), until it is caught in that stack, making them conceptually and practically different from GOTOs.


It really depends on the language. In Python, exceptions are idiomatic even for non-fatal failures, the runtime is optimized for that performance-wise, and it's what API clients expect.


Interesting, I've seen the same discussion but applied to OCaml, with an option type vs an exception. For example, the module List has both List.nth, which raises Failure if the list is too short, and List.nth_opt, which returns None if the list is too short. One aspect that I've seen mentioned about OCaml and not with this article is performance. An option will trigger an allocation, and will generally be slower than an exception. On the other hand, the option approach is safer. I'm not sure if that's true with Python too, though.


Performance cost is a legitimate concern but one that should be considered in relative terms.

e.g. yes there are more allocations, but in what context is the code being used? Are we building gigabyte-scale real-time systems? No? OK, then let's maybe not worry about performance right now and instead optimize for intuition, maintainability and readability. Obviously while at the same time monitoring CPU and memory consumption of our applications to help us determine when we might want to look to optimize our code.


> then let's maybe not worry about performance right now and instead optimize for intuition, maintainability and readability.

I would argue that throwing exceptions is the intuitive and readable thing for Python. Having to call isinstance for all possible return types of a function is not the "pythonic" way.


That's interesting - coming from C++, exceptions are much slower than returning -1 on error. I suppose this is because C++ doesn't need to allocate an Option.


Throwing an exception is much slower, but that only happens on error. With option types, you still have to create and return an instance even if successful, and if that causes a heap allocation, you're worse off than exceptions if the success case is much more likely.

That said, it's really an OCaml implementation deficiency - std::optional in C++ is not heap-allocated, for example (although there's still some overhead due to sizeof being larger & possibly not fitting in a single register).

On the other hand, in (C)Python, exceptions behave more or less like in C++ semantically, but under the hood the VM implements it all as error return codes and a thread-local "current exception" pointer, which is closer to std::optional perf-wise than it is to real exceptions. Hence why performance is rarely a consideration when deciding whether to throw or return in Python, and why exceptions are so idiomatic there.



One place where you are pretty much forced to use error objects is queue-based concurrency.

In my case, a separate thread listened for network events, parsed them, and put the results on a queue for the main thread to consume. For parsing errors, I needed to put error objects onto the queue instead of terminating the whole thread with an exception.

In this sense, error objects generalize much better to cases where you don't have the usual function-call structure.
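
A rough sketch of that shape (the event source, parser, and handlers are placeholders):

  import queue
  import threading

  results: queue.Queue = queue.Queue()

  def listener():
      for event in network_events():             # placeholder event source
          try:
              results.put(("ok", parse(event)))  # placeholder parser
          except ValueError as exc:              # whatever the parser raises
              # Put an error object on the queue instead of killing the thread.
              results.put(("error", exc))

  threading.Thread(target=listener, daemon=True).start()

  while True:
      match results.get():
          case ("ok", parsed):
              handle(parsed)                     # placeholder consumer
          case ("error", exc):
              log_parse_failure(exc)             # placeholder error sink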


++, I normally use the `result` lib to wrap good/error return values, but your approach without it honestly doesn't lose that much.

I would still use `result` for bigger projects, or if I was dubious that the consumer would do the right thing, though.



