

Finding memory leaks - tilt
http://gent.ilcore.com/2011/08/finding-memory-leaks.html

======
iam
Unfortunately I'm not sure that anyone will bother fixing their sites until
everyone else starts doing it too.

It's too easy for web developers to just pin the blame for a memory leak on
the browser, and none of the users will be the wiser.

If some day in the future there's an extremely easy way to figure out which
sites are using how much memory (each tab in its own process?), then we could
start seeing some pushback to get JS leaks fixed.

~~~
lukesandberg
I think that day is rapidly approaching, or is here already. Using Chrome you
can inspect per-tab memory usage and use that to identify leaks that may be
coming from a page (in fact, that's what the author did originally). And in
Firefox you can use about:memory to inspect per-tab JS memory compartments.
It's still not always obvious who is to blame (browser or site), but it is
becoming very easy to spot this kind of behavior.

~~~
mbrubeck
For anyone trying this at home: per-compartment memory reporting is a new
feature of about:memory in Firefox 7. You'll need a build from the Aurora
channel to try it out right now: <http://firefox.com/channel>

------
yason
Back in the good old days, you would actually run out of memory. It kept you
on your toes, unlike today, when allocations rarely if ever fail. Instead,
everything just collectively gets paged out to disk, turns sluggish and
sloshy, and eventually becomes inoperable. And at that point you really can't
pin it down to some stupid web page, because everything is equally paged out
by then.

The browser could turn memory-hogging tabs red. It could impose an absolute
or relative limit on each tab's memory usage. Or, as used to be nice in the
old days, each tab could expose a function that causes it to minimize its
memory usage, i.e. ditch any runtime data that can be recomputed later. But
since web pages are programs these days, this interface would have to be
negotiated between the browser and the creators of web pages. Not going to
happen.

Since web pages are sort of programs, I like the Chrome way of putting them
into separate processes. Then let the operating system manage them: it's much
better suited for that than a browser. Using setrlimit(2) to limit the amount
of memory could be useful. Automatically killing and restarting a web page
when it hits these limits would give developers an incentive to keep their
pages small; or not keep them small, but design them to cope with these
mandatory resets.

~~~
1amzave
> _Back in the good old days, you would actually run out of memory. It kept
> you on your toes unlike today when allocs rarely if never fail. Instead,
> everything just collectively gets paged out to disk, then sluggish and
> sloshed, and eventually inoperable. And at that point you really can't
> pinpoint it down to some stupid web page because everything is equally paged
> out by the time._

There's always swapoff(8) if you want that behavior back... you might want to
tweak /proc/$PID/oom_adj (assuming Linux) to make sure the OOM killer axes the
"right" process, though.

------
mohsen
This is the type of article I enjoy seeing on HN.

------
nefarioustim
This is a great article.

A good portion of my day-to-day work involves handling JS memory leaks, due to
the growing reliance on JS for behaviours, interactions, and application logic
(sometimes an over-reliance, IMHO). It's sad that most of the JavaScript
developers I interview have no real idea what causes leaks or how to fix them.
We could really do with making the community more aware of this stuff!

~~~
leon_
Talking to JS programmers about memory management is like talking to
VisualBasic programmers about data alignment. It won't lead anywhere.

Fixing leaking websites should be done by the JS VM, as web devs are usually
pretty non-technical and have no understanding of concepts like memory
management.

