
You Really Should Log Client-Side Errors - latch
http://openmymind.net/2012/4/4/You-Really-Should-Log-Client-Side-Error/
======
rakeshpai
I'm the developer of Errorception (<http://errorception.com/>), and wanted to
jump in to talk about some of the points raised on the thread here.

As chrisacky mentioned, making one HTTP post for every error is very wasteful.
It's much better to buffer up such errors (Errorception does it in memory,
since localStorage can be unavailable and/or full), and post them every once
in a while.
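The buffer-and-flush approach can be sketched in a few lines (names, the interval, and the payload shape here are illustrative assumptions, not Errorception's actual code):

```javascript
// Minimal sketch of buffered error reporting (illustrative only).
var errorBuffer = [];

function recordError(message, file, line) {
  // Buffer in memory; localStorage can be unavailable and/or full.
  errorBuffer.push({ message: message, file: file, line: line });
}

function flushErrors(send) {
  if (errorBuffer.length === 0) return [];
  // Drain the buffer and send everything in one request.
  var batch = errorBuffer.splice(0, errorBuffer.length);
  send(batch);
  return batch;
}

// In a browser you would wire these up roughly like:
//   window.onerror = recordError;
//   setInterval(function () { flushErrors(postToServer); }, 10000);
```

The point is that the server sees one bulk request per interval rather than one request per error.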

As masklinn pointed out, window.onerror _is_ complete shit, so Errorception
does a couple of tricks to make this slightly better. Firstly, on IE, the
stack isn't lost when window.onerror is called, so it's possible to get the
call stack and arguments at each level of the stack. Secondly, it's very easy
to get other kinds of details (browser, version, page, etc.), which help a
great deal in debugging.
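A handler along these lines can attach such environment details to each report (the field names are my own invention; Errorception's real payload will differ):

```javascript
// Sketch: enriching a window.onerror report with environment details.
// Field names are illustrative assumptions.
function buildReport(message, file, line) {
  return {
    message: message,
    file: file,
    line: line,
    page: typeof location !== "undefined" ? location.href : null,
    browser: typeof navigator !== "undefined" ? navigator.userAgent : null,
    when: new Date().toISOString()
  };
}

// In the browser:
//   window.onerror = function (message, file, line) {
//     recordError(buildReport(message, file, line));
//   };
```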

However, masklinn's suggestion about wrapping code in try/catch blocks is
probably not a good idea. This is because some interpreters (I know V8 does
this) won't optimize functions containing try/catch, which may cause a
performance hit.

As DavidPP mentioned, depending on the nature of your application, it might be
a good idea to not record too much sensitive information. For example,
Errorception doesn't record function arguments if the page is served over SSL.

troels is right - this does create a massive flood of errors. There are
several ways to deal with this. What we do for example, is hide away errors
from most third-parties - Facebook, Twitter, GoogleBot, GoogleAnalytics, etc.
The rest of the errors can still be huge in number, so we group similar errors
together based on several parameters like browsers, browser versions, possible
variation in inline code line-numbers because of dynamic content, etc.
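The grouping idea can be sketched as a fingerprint over a few parameters (the choice of keys here is an assumed illustration, not Errorception's actual grouping logic):

```javascript
// Sketch: grouping similar errors by a fingerprint of a few parameters.
// The key choice is illustrative only.
function fingerprint(err) {
  // Deliberately ignore line numbers for inline scripts, since dynamic
  // content shifts line numbers between page loads.
  var line = err.inline ? "inline" : err.line;
  return [err.message, err.file, line, err.browser].join("|");
}

function groupErrors(errors) {
  var groups = {};
  errors.forEach(function (err) {
    var key = fingerprint(err);
    (groups[key] = groups[key] || []).push(err);
  });
  return groups;
}
```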

Also, as Kartificial pointed out, this is probably something you don't want to
do on your own server. You want to move this out of your infrastructure, and
distribute it if possible.

There are other concerns - some that come to mind are page load time, ensuring
that your error catching code itself doesn't raise errors (or if it does, it
doesn't affect your app in any way), and that of managing data-growth on the
server. These are fun problems, but it's probably not worth re-inventing the
wheel.

</plug>

~~~
ef4
The V8 performance issue with try/catch is widely misunderstood, and it causes
people to unnecessarily avoid using it.

try/catch can slow down code _in the same function as the try/catch
statements_. But it doesn't slow down any other function called from within
there. In other words, it only affects the stack frame at which the try/catch
statement actually appears.

So if you have some top level error trapping function like:

    
    
        errorCatcher = function(continuation){
          try {
            continuation();
          } catch(err){
            reportErrorToServer(err);
          }
        }
    

and you only call it once at every entry point to your code, the impact will
be totally undetectable, because there's only a single function invocation
within the unoptimizable area.

I encourage people to performance test the difference caused by try/catch like
this. You'll see that it doesn't hurt at all. It's only a problem if you
actually write a try statement within some tight inner-loop function.

~~~
gruseom
(Rewritten to report results)

I tried the following 4 variations in Node.js 0.6.13, V8 3.6.6.24. (The
"global" business is for accessing the functions from a REPL.)

    
    
      var lots = 1000000;
      
      // A: try/catch inside loop
      global.a = function () {
          for (var i=0; i<lots; i++) {
              try { } catch (e) { }
          }
      }
      
      // B: one try/catch, inlined loop
      global.b = function () {
          try {
              for (var i=0; i<lots; i++) { }
          } catch (e) { }
      }
      
      // C: one try/catch, loop in local function
      global.c = function () {
          var loop = function () {
              for (var i=0; i<lots; i++) { }
          }
          try { loop(); } catch (e) { }
      }
      
      // D: one try/catch, loop in top-level function
      function loop () {
          for (var i=0; i<lots; i++) { }
      }
      global.d = function () {
          try { loop(); } catch (e) { }
      }
      
      // E: no try/catch
      global.e = function () { loop(); }
      
      global.time = function (times, fn) {
          var start = new Date().getTime();
          for (var i=0; i<times; i += 1) { fn(); }
          return (new Date().getTime() - start) / times;
      };
    

In the REPL:

    
    
      > ({ a : time(100,a), b : time(100,b), c : time(100,c), d : time(100,d), e : time(100,e) });
      { a: 7.66, b: 5.29, c: 5.06, d: 1.45, e: 1.32 }
    

So a million try/catches (A) is bad. But hoisting the try outside the loop (B)
or putting the loop into a local function (C) doesn't do nearly as well as
putting the loop in an outside function (D), which is almost as fast as no
try/catch at all (E). I tried a few variations (like making the loop actually
do some work) and the outcomes were all comparable.

This clarifies and confirms what ef4 said: don't put CPU-intensive code inside
the same function as a try/catch. Put it somewhere else and call it. Also,
local functions don't necessarily count as "somewhere else".

~~~
spicyj
My understanding is that each execution of the try statement incurs a
performance hit.

~~~
gruseom
(I've rewritten the GP)

That appears to be wrong, since B, C, and D all execute the same number of
try/catches.

------
masklinn
If you do that, don't be surprised when window.onerror seems to be complete
shit and not provide a fraction of the information you need: window.onerror
_is_ complete shit and _does not_ provide a fraction of the information you
need.

If an exception is thrown anywhere, window.onerror will _only_ get you the
exception message, file and line (the latter two being utterly useless in
minimized code of course).

If you can, wrap all behaviors (meaning all event callbacks as well) in a
try/catch, so you can get the actual error object and hopefully extract useful
information from it (or from the environment; this pattern also lets you log
and report the initial state which ultimately led to the fault).
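That wrapping pattern can be sketched like this (helper and function names are hypothetical):

```javascript
// Sketch: wrap every event callback so the real Error object is
// available, not just window.onerror's message/file/line.
function wrapCallback(fn, report) {
  return function () {
    try {
      return fn.apply(this, arguments);
    } catch (err) {
      // Here we have the actual Error object: err.stack, err.name, etc.
      report(err);
      throw err; // rethrow so behavior is otherwise unchanged
    }
  };
}

// Usage in a browser (names assumed):
//   element.addEventListener("click", wrapCallback(onClick, reportErrorToServer));
```

Per ef4's point elsewhere in the thread, the try/catch lives only in the thin wrapper frame, so the wrapped callback itself stays optimizable.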

~~~
glenntzke
I thought the same thing. I'd expect a developer thoughtful enough to catch
client-side errors would also deploy their code minified, and JS errors are
infamously unidentifiable by message alone (unexpected reference to undefined,
anyone?).

I think this could be helpful given some savvy javascript written expressly to
be parsed on error, but the overhead that might bring doesn't seem worth it.
And with enough users you may have a deluge of data to sift through.

------
chrisacky
Theory correct. Implementation wrong.

If I were actually doing this and wanted to run it on a production site, I
would certainly not fire off a POST on each error. Off the top of my head, the
most likely solution would be to store the data in the current user's
localStorage, where supported.

You wouldn't need to collect _every single error_, but just a sample. You
shouldn't really need to be that concerned with the totally extreme edge
cases; if you are triaging correctly, it's the most common errors that you
want to look out for.

So I would implement something that collects errors from a random selection of
my visitors, and also build a buffer and client-side rate limit into the
application so that errors are saved up and sent in bulk.
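A rough sketch of that sample-and-rate-limit scheme (all thresholds are arbitrary illustrative numbers):

```javascript
// Sketch: enroll a random fraction of visitors, cap errors per page
// view, and buffer for bulk sending. Numbers are illustrative.
var SAMPLE_RATE = 0.1;  // log errors for ~10% of visitors
var MAX_PER_PAGE = 20;  // client-side cap per page view

function makeSampler(random) {
  var enrolled = (random || Math.random)() < SAMPLE_RATE;
  var logged = 0;
  var buffer = [];
  return {
    record: function (err) {
      if (!enrolled || logged >= MAX_PER_PAGE) return false;
      logged++;
      buffer.push(err);
      return true;
    },
    // Called periodically to send the saved-up errors in one request.
    drain: function () { return buffer.splice(0, buffer.length); }
  };
}
```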

As I start collecting errors which I realise are totally beyond my control, I
would then stop logging errors generated by particular files and installed
extensions.

Logging _everything_ is wrong in my opinion.

Edit:

Yes I would do the _bulk_ of deciding what to log on the server. However, I
think I would still have some standard rules that I know I can apply to
disregard certain error events. But since I would constantly be working over
the logs, the majority would be done server side.

~~~
alexchamberlain
Where would the decision to drop an error be made?

I would do it on the server, where an intelligent decision can be made.
(Increment a counter if it's a duplicate? Group errors together etc?)

~~~
kamjam
I would probably catch errors as you would with server-side logging, log them
according to severity, and have the ability to turn on/off logging of certain
types of errors via config (eg. log.debug, log.warning, log.fatal etc).

Although I've never had client errors logged to the server, I have had them
logged in a hidden "console" (using Firebug if available, otherwise a custom
logger). This gave us enough info to debug the error, and we could tell users
to open the console and copy-paste the trace (it was accessible via the key
combo Ctrl+`).

------
troels
I tried something similar recently and got flooded with errors. The problem is
that Javascript errors happen for all sorts of reasons outside your control:
affiliate banners, Facebook Like buttons, strange browser toolbars, to name a
few. And then you have all the errors that happen when someone navigates away
from a page before it has completely loaded.

That is not to say that this isn't useful, but you need to be able to cope
with the noise.

------
DavidPP
This can be touchy if you handle sensitive information in JavaScript, but if
not, you can also send those errors as events in Google Analytics, saving you
the trouble of developing a back-end for it.

This way, you get the number of errors, on which page, for which browser, etc.
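With the classic async snippet of that era, a sketch might look like this (the category/label scheme and the `push` indirection are my own assumptions; `_gaq.push(['_trackEvent', ...])` is the documented queue call):

```javascript
// Sketch: report JS errors as Google Analytics events via the classic
// async queue (_gaq). Category/label choices are arbitrary.
function gaErrorEvent(push, message, file, line) {
  // The final `true` marks the event as non-interaction, so error
  // events don't distort bounce rate.
  push(["_trackEvent", "JS Error", String(message), file + ":" + line, 0, true]);
}

// In the browser:
//   window.onerror = function (message, file, line) {
//     if (window._gaq) gaErrorEvent(function (e) { _gaq.push(e); }, message, file, line);
//   };
```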

~~~
dclaysmith
I like this idea a lot. Quick search on the subject returned this article:

[http://uxdesign.smashingmagazine.com/2011/10/04/improve-the-...](http://uxdesign.smashingmagazine.com/2011/10/04/improve-the-user-experience-by-tracking-errors/)

Seems like a great way to handle error tracking without creating more
overhead.

~~~
dazbradbury
The linked article is more about capturing user errors. In other words,
capturing events in your analytics when someone enters an incorrect password.
I agree, this could be very insightful information.

However, it wouldn't take much to apply the same logic to what the OP is
suggesting: routing all javascript errors to GA. It would take the load off
your servers, and GA already provides a front end for you to browse.

Thanks for the suggestion, I hadn't thought to use GA, and it seems like
something I may implement. Hopefully it will garner some useful insights, and
if not, no harm done!

------
dmethvin
Rather than using $.ajax, I've used a simple image request with URL parameters
for sending back error info. That has the benefit of allowing cross-domain
transfers. Be sure to save the user-agent info as well so you can correlate
errors with browsers.
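A sketch of that image-beacon technique (the endpoint and parameter names are made up):

```javascript
// Sketch: encode error details as URL parameters on a 1x1 image
// request, which works cross-domain. Names are illustrative.
function beaconUrl(endpoint, err) {
  var params = [
    "msg=" + encodeURIComponent(err.message),
    "file=" + encodeURIComponent(err.file),
    "line=" + encodeURIComponent(err.line),
    "ua=" + encodeURIComponent(err.userAgent)
  ];
  return endpoint + "?" + params.join("&");
}

// In the browser:
//   new Image().src = beaconUrl("https://log.example.com/e.gif", errInfo);
```

Keep the encoded payload short: as noted in the reply below, old IE enforces a hard URL-length limit, so a deep stack trace may need truncating.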

~~~
masklinn
That can be quite limiting, especially if you're trying to support MSIE, as it
has a hard cutoff of roughly 2,000 characters for the URL (so you're limiting
yourself to about 2k of data, which may not even be enough for a slightly deep
stack trace).

------
jreposa
Airbrake (formerly known as Hoptoad) has a JavaScript notifier. We use it for
server-side logging, but this article has inspired me to add it to the client
side as well.

[http://help.airbrake.io/kb/troubleshooting-2/javascript-noti...](http://help.airbrake.io/kb/troubleshooting-2/javascript-notifier)

Can't believe I wasn't doing this already. Thanks!

~~~
driverdan
Airbrake's JS logging is shit. It catches so little data that it becomes an
annoyance. You'll see things like "Script Error" at file 0.

The only time it's useful is for typos and things so basic that you'll find
them when you do testing.

~~~
jreposa
Good to know. I'm running a test right now, so I should have a good idea if I
need to pull it in the next hour or so.

------
davideweaver
There are definitely a lot of issues to overcome when logging errors from a
javascript client (as pointed out in the comments to this post). Regardless,
the information you can get from it can be really helpful and you should be
doing it.

One of the problems we find with js errors is that they don't contain a lot of
information by themselves, especially when you're just trapping
window.onerror() with minified source. We like to augment the client-side
errors with server-side errors/activity as well. If your logging solution
supports being able to track errors and activity by a session (username, ip
address, etc) you will have a much better time tracking down what caused a
specific js error. Putting all the pieces together gives a much better
picture.

Try a logging solution that enables you to track more. I run loggr.net, so I
am partial to that, but there are other good solutions out there like
newrelic.com, loggly.com.

------
Kartificial
Solid approach if you really want to dig into problems occurring client-side
in projects that are still in beta.

I'm not sure whether or not you would want this in a production page.

~~~
snprbob86
We do this on a production page, but with some modifications. As luck would
have it, I recently made a Gist about this :-)

<https://gist.github.com/2210783>

Our approach integrates with a syslog-style logger. The logger preserves a
ring buffer of messages, to help with debugging, since minified stack traces
aren't super useful.
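A fixed-size ring buffer like the one described can be sketched as follows (capacity and method names are illustrative, not the Gist's actual code):

```javascript
// Sketch: keep the last N log messages to attach to an error report,
// giving context that a minified stack trace can't.
function RingBuffer(capacity) {
  this.capacity = capacity;
  this.items = [];
}

RingBuffer.prototype.push = function (msg) {
  this.items.push(msg);
  // Discard the oldest entry once the buffer is full.
  if (this.items.length > this.capacity) this.items.shift();
};

RingBuffer.prototype.snapshot = function () {
  return this.items.slice();
};
```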

To mitigate the risk of a run-away infinite loop alert monster, alerts are
intelligently rate limited client side. We also do server side filtering, so
that known/expected errors are ignored. For example, we don't really care
about Chrome extension-related failures, although I guess we might at some
threshold for popular extensions.

It's relatively noisy, but so far it has been indispensable for debugging
errors in production.

~~~
dunham
We've been doing this for a few years in our GWT app. We also keep a ring
buffer of log events.

We don't have much noise, but we're using GWT's "unhandled exception" hook
which I believe is driven by try/catch blocks on all entry points into the
app. (i.e. event handlers - I think they just wrap callbacks before hooking
them up.) So we only get exceptions that occur in our own code.

For us, the stacktraces are actually very useful (on the few browsers that
provide traces). We dump a translation table from GWT's minification process
and use it on the server side to de-obfuscate the stack traces. It works on
all of the function/method names, but not local variable names.

The new javascript "source maps" technology should provide enough information
to do this with any minifier or compiler. I'd expect your favorite javascript
minifier to start outputting source maps in the next few months.

------
PaulHoule
I was building a Microsoft Silverlight system and built something like this
for it. There you have exceptions that are just like Java exceptions, so I'd
bottle them up and send them back to the server.

The system also kept track of the context in which async comm was running so I
could link up events on the client to events on the server and vice versa.

------
mikeflynn
We've been logging client-side for a little while now, and while we have tons
of log files filled with Facebook Like button issues, it has been very helpful
a number of times.

Of course, some browsers are less helpful than others (IE loves "line 0"
errors).

~~~
dmethvin
Every browser has its quirks on errors. At least IE will give you stack info
from window.onerror even without a try/catch, although they didn't get the
file name right until IE8. The "line 0" errors are often due to eval scripts
where there is no source file, it could even be something like a bad JSON
string being passed to a JSON.parse shim.

------
floodfx
We do something similar at Bizo. Here's a quick write up of an Amazon Web
Services-centric approach:

[http://dev.bizo.com/2012/04/capturing-client-side-js-errors-...](http://dev.bizo.com/2012/04/capturing-client-side-js-errors-on-aws.html)

------
dclaysmith
I played around with client-side logging using Google Analytics last night.
Definitely has potential...

<http://news.ycombinator.com/item?id=3802180>

------
stdclass
You could also use <http://MonitrApp.com>

It also has an integrated task management tool :)

------
lucian1900
I think all those requests might become a problem. It would be relatively easy
to buffer such logs, though.

Nice approach.

~~~
masklinn
> I think all those requests might become a problem.

Well, it depends. The request is "fire and forget", so it's really cheap: you
don't care about it coming back, or even about it generating an error (though
you definitely need to handle that if you hook into ajaxError[0], or you might
get recursive reporting errors). And you could expect those errors to be rare
enough that the "buffer" would fill slowly, so the odd error push wouldn't be
a big strain.

[0] hooking into ajaxError is pretty dangerous, really: if the user's
connection goes down, for instance, the user's machine is instantly going to
loop at high speed.
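One hedge against that recursive-reporting loop is a re-entrancy guard around the reporter itself; a sketch (all names are hypothetical):

```javascript
// Sketch: guard the reporter so a failure while reporting never
// triggers another report, avoiding the loop described above.
function makeReporter(send) {
  var reporting = false;
  return function report(err) {
    if (reporting) return false; // already inside a report; drop it
    reporting = true;
    try {
      send(err); // fire-and-forget; we ignore the response
      return true;
    } catch (ignored) {
      return false; // the reporter itself must never throw
    } finally {
      reporting = false;
    }
  };
}
```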

~~~
lucian1900
I meant expensive server-side. I would even like to run something like this in
production.

