Note: as a progressive optimization, the very same solution can also be extended to full pages, i.e. not just "body" parts.
In that case, the main benefit over a "regular" load is that the JavaScript files you use don't get evaluated over and over, i.e. you load them once and for all.
I've been trying to achieve this same effect (without this library, of course...). I had links pointing to the full page, but used jQuery to override the click event of each link so that it used Ajax to download only a part of the page into a container. Which container is determined by the JavaScript, which I carefully wrote to avoid confusing myself; which body part to download was determined by the server. I did not have to duplicate my HTML: the body part was a template by itself, included by a master template for the full page. Depending on the URL, the server served either the full page or just the body part. When a body part was downloaded, some jQuery setup needed to be re-run (e.g. $(x).accordion()).
On incompatible browsers the JavaScript functionality did not activate, and clicking the links led to the full versions of the pages.
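A minimal sketch of that pattern, in case it helps anyone; the selectors (#content, a.ajax-nav) and the ?fragment=1 convention are made up for illustration, not taken from the comment above:

    // Override clicks on the links that should load via Ajax.
    $(document).on('click', 'a.ajax-nav', function (e) {
      e.preventDefault();
      var href = $(this).attr('href');
      // Hypothetical convention: with ?fragment=1 the server renders only
      // the body-part template; without it, the master template wraps it.
      $('#content').load(href + '?fragment=1', function () {
        // Re-run the jQuery setup the freshly inserted HTML needs.
        $('#content .accordion').accordion();
      });
    });
    // If JavaScript never runs, the handler is never bound and the links
    // simply navigate to the full pages -- the fallback described above.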
I have a similar library here http://bit.ly/fTj2Ls that uses pushState to provide a routing layer, so you can perform Ajax (or anything else you want to do) when a link is clicked. The best bit is that if JavaScript, or pushState, isn't available, the links still point to real pages on your server.
You mean doing Ajax without changing the hash and all the related cruft?
I guess that's because most browsers in use today don't support changing the URL path directly without causing a reload. That has only been possible since the introduction of the HTML5 History API, which solved exactly that problem. Unfortunately, it's only supported by newer browsers.
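For reference, the relevant API is just two pieces: history.pushState to change the path without a reload, and the popstate event to handle back/forward. Something like this (the '/about' path is only an example):

    if (window.history && history.pushState) {
      // Change the visible URL path without triggering a page load.
      history.pushState({path: '/about'}, '', '/about');

      // On back/forward the browser fires popstate instead of reloading;
      // e.state is whatever object was passed to pushState (or null).
      window.addEventListener('popstate', function (e) {
        console.log('now at', location.pathname, e.state);
      });
    }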
On my machine, loading the scripts once instead of re-evaluating them on every page saves 300ms per page.