JS performance: try / catch versus checking for undefined (jsperf.com)
43 points by ianox on May 3, 2012 | hide | past | favorite | 26 comments

And? This isn't very surprising. Exceptions are slower than regular short-jump control flow.

In many cases, the non-exception path is as fast or faster than if/then/else. This is because compilers and interpreters tend to be optimized for non-exception paths.

If you actually throw and catch an exception it might be slower, but since most of the time no exception is thrown, the overall result can be faster with exceptions: the happy path has no extra branches in its control flow to slow things down.
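The trade-off described above can be sketched like this (function and key names are mine, not from the benchmark):

```javascript
// Style 1: signal a missing key with a sentinel; every caller must branch.
function getChecked(map, key) {
  const v = map[key];
  return v === undefined ? null : v; // caller branches on null
}

// Style 2: throw on the rare miss; the common path is straight-line code.
function getOrThrow(map, key) {
  const v = map[key];
  if (v === undefined) throw new Error("missing key: " + key);
  return v;
}

const cfg = { host: "localhost" };
console.log(getChecked(cfg, "host")); // "localhost"
try {
  console.log(getOrThrow(cfg, "port"));
} catch (e) {
  console.log("fell back"); // only the rare miss pays the exception cost
}
```

Whether style 2 actually wins depends on how often the miss happens and how expensive the engine makes the throw, which is exactly what the benchmark is arguing about.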

The micro benchmark in the original article is meaningless. This should be measured in a real-world situation where the catch is 5-10 levels up from the throw, and there are a lot more throws than catches.

Look at the latest revision: http://jsperf.com/try-catch-error-perf/16

Your "not surprising" result is only valid in Chrome and Opera. Other browsers are faster at try/catch.

The latest revision is a completely different test. As klodolph explains in another comment, the presumably common case of no exception is generally how try/catch is optimized. So it stands to reason that the exception case would be slower and that there is potential for the non-exceptional case to be faster.

That test is broken. In JavaScript the value of an undefined property will be `undefined`, not `null`, so the second test is doing twice the work of the first.
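This is easy to verify in a console:

```javascript
const obj = {};

console.log(obj.missing === undefined); // true: absent properties read as undefined
console.log(obj.missing === null);      // false: null is a distinct value

// So a test that also compares against null after the undefined check
// performs two comparisons where one would do. Loose equality matches both:
console.log(obj.missing == null);       // true
```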

It's a good reminder though, because sometimes people get ideas in their heads about using exceptions for something other than, well, exceptions, and this is usually a bad idea.

Coming from the Python ecosystem, I am utterly horrified by those results. In fact, I thought the graphs were broken until I got to IE10.

Yes, but remember that Python exceptions are cheap not because exception handling is inherently cheap, but because you're already paying for the machinery on every statement anyhow, so you might as well use them.

While perhaps not so extreme, I have found similar behaviour in C++. I worked on a code base where the nicest way to write an algorithm would 'throw' around 100,000 times/sec, and we had to rewrite it to keep checking the return values of functions and return early on failure, just to get reasonable performance.

Most C++ ABIs use something called zero cost exceptions where the performance of the uncommon "throws" path is made much slower so that introducing a try/catch block has no performance penalty when no exception occurs. For details, see LLVM's docs on the matter: http://llvm.org/docs/ExceptionHandling.html

The C++ philosophy has always been that exceptions are for truly exceptional situations, not a tool for regular flow control. Consequently, practically all C++ implementations optimize heavily for the non-exceptional case - try/catch blocks are almost free as long as nothing gets thrown.

Worth noting that `undefined` is a variable, and not a keyword.

   undefined = 1;
   >>> 1
   null = 1;
   >>> ReferenceError: invalid assignment left-hand side
It looks like Chrome and Firefox no longer let you assign to window.undefined, but I'm not sure if IE has the same restriction. The correct way to check for undefined is via `typeof`.
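For reference, the `typeof` check is safe even for identifiers that were never declared (the variable names below are illustrative):

```javascript
// No ReferenceError, even though neverDeclared doesn't exist anywhere:
console.log(typeof neverDeclared === "undefined"); // true

// A direct comparison would throw for undeclared identifiers:
// console.log(neverDeclared === undefined); // ReferenceError

// For object properties, either form works:
const obj = {};
console.log(typeof obj.missing === "undefined"); // true
console.log(obj.missing === void 0);             // true: void 0 is always undefined
```

`void 0` was the other common defence against a rebound `undefined`, since `void` always yields the real undefined value.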

Why is try/catch so slow?

On SpiderMonkey, a Javascript value is a 64-bit word. In most cases, 32 bits are set aside for the tag, and one possible tag is 'undefined value'. So checking to see if a value is undefined is basically a simple comparison, like checking if a pointer is NULL in C (except very slightly more work).

On the other hand, try/catch requires traversing some kind of structure to find the catch handler, and exceptions are often allocated somehow. This kind of action will need to inspect the stack, possibly look at the heap, and probably do many branches.

Plus, try/catch is not typically optimized for speed of catching exceptions. Exception handling is typically optimized so that the code path that doesn't throw an exception is almost as fast as it would be if the exception handling weren't there. You want people to use your exception handling so they can write more error-tolerant programs, but they might throw it out if it slows down their correct code. If you make sure you aren't slowing down the non-exception path, you may have to make tradeoffs that slow down the exception path.

The programming language shootout had a test of try/catch performance across languages. I remember Lisp was at the top, though it's not as if anyone cares -- try/catch performance isn't relevant to most programs.

OTOH, you may find it interesting that the reverse is true in Python. In Python, it is almost always faster to catch an exception rather than to check first. The way CPython checks for errors is by checking the return value of functions against NULL, after all, which is very fast. Checking ahead of time requires more Python code, and the Python code is going to be the slow part, at least on CPython.

That's true - how about repeating the test with valid code that DOESN'T throw an exception?

- Edit -

Just tried it, it's not even close (on FF4):

   tryCatch with undef  - ~55,000
   ifCheck with undef   - ~2,000,000
   tryCatch with Object - ~100,000,000
   ifCheck with Object  - ~2,000,000
So if an error is not thrown, the try/catch expression is WAY faster.
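A rough sketch of that comparison (loop count and function names are mine; absolute numbers will differ wildly per engine, as the thread shows):

```javascript
// Access o.prop.x two ways; with a valid object neither path throws.
function getTry(o) {
  try { return o.prop.x; } catch (e) { return null; }
}
function getIf(o) {
  return (o && o.prop !== undefined) ? o.prop.x : null;
}

function bench(label, fn, arg) {
  const N = 1e6;
  const t0 = Date.now();
  let last;
  for (let i = 0; i < N; i++) last = fn(arg);
  console.log(label, (Date.now() - t0) + "ms");
  return last;
}

bench("tryCatch with Object", getTry, { prop: { x: 1 } });
bench("ifCheck  with Object", getIf,  { prop: { x: 1 } });
bench("tryCatch with undef",  getTry, undefined); // throws + catches every iteration
bench("ifCheck  with undef",  getIf,  undefined);
```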

I suspect the technical reason is because it hasn't been aggressively optimised. The reason for that is probably that it's not widely used in the real world, or at least not in performance-critical code. Especially in very tight code like that, I don't think there's a fundamental reason why the compiler couldn't detect the single throw site and compile the code down to the same as the conditional. It gets a bit messier when cascading an exception down the call stack.

One reason try/catch is slow is that it involves unwinding multiple stack frames, cleaning up as you go, until you hit a catch block.

Try/catch is also a very complicated flow control mechanism (think about how try/catch/finally/return all mesh together) such that a lot of VMs simply don't even attempt to apply any of their fancy optimizations to any function that contains a try/catch.

At least for Chrome, all optimisation is disabled in functions containing a try/catch. It's probably similar for the other browsers. Try/catch combined with some of the more sneaky semantics of javascript is rather a nightmare to optimise, I can imagine. Even a simple looking value can actually be a huge function chain (via defineProperty, defineGetter, or valueOf).
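A common workaround at the time was to move the try/catch into its own small function, so only the wrapper gets deoptimised and the hot loop itself stays optimisable (the names here are illustrative, not from any particular codebase):

```javascript
// The wrapper contains the try/catch, so only it gets deoptimised.
function parseSafe(json) {
  try { return JSON.parse(json); } catch (e) { return null; }
}

// The hot loop has no try/catch, so the engine can still optimise it.
function countValid(items) {
  let ok = 0;
  for (let i = 0; i < items.length; i++) {
    if (parseSafe(items[i]) !== null) ok++;
  }
  return ok;
}

console.log(countValid(['{"a":1}', 'not json', '[1,2]'])); // 2
```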

UI Feedback: It would be cool if we could collapse all the Chrome 19.x.x together.

If you hit the 'major' filter above the graphs it'll bring them together.

I stand corrected! Thanks.

It doesn't matter. You should signal errors with exceptions, not by returning undefined no matter how much slower it is. After all, it's supposed to be an exceptional case.

Note IE10 results.

So... it IS fast.

It seems they are doing good work.

Note that this is an old revision of the test.

"Old revision" doesn't really mean anything on jsPerf since anybody can create a new revision which will increment the counter. The main point of having them numbered is so that you have something stable to link to.

The latest revision (35, as of this comment) is basically equivalent from most JS engines' perspective, and shows pretty much identical results.
