This is a nice introduction, but it looks like there is a problem with the formatting: some of the math is not rendered properly (in the source, some expressions are wrapped in dollar signs rather than script tags).
I hate to be that guy, but "Raphson", not "Rhapson" (http://en.wikipedia.org/wiki/Joseph_Raphson). It's also worth noting that no one ever actually forms the inverse of H, even if H is dense. At worst you would compute some factorization of H and use that to solve for the update d.
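For concreteness, here's a rough sketch of what I mean (my own illustration in NumPy/SciPy, assuming H is symmetric positive definite so a Cholesky factorization applies):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def newton_step(H, g):
    """Newton update via factorization: solve H d = g
    rather than ever forming H^{-1} explicitly."""
    c, low = cho_factor(H)       # Cholesky factorization, ~n^3/3 flops
    d = cho_solve((c, low), g)   # two triangular solves, O(n^2)
    return -d                    # descent direction

H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])
print(newton_step(H, g))
```

The factorization can also be reused across multiple right-hand sides, which is another reason you'd never pay for the explicit inverse.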
Is it typical to interface to Newton's method via a function that computes the inverse Hessian? I've never seen that. Typically, people claim it is numerically unstable to explicitly invert the Hessian, and that it would be better for the interface to take the Hessian itself, and then call a subroutine to do a linear solve.
In practice you only ever need $H^{-1} g$; L-BFGS stores the $\{s_k\}$ and $\{y_k\}$ vectors, which allow you to compute $H^{-1} g$ directly without ever forming the Hessian or its inverse (see the sketch below). There are techniques that aren't BFGS-based which approximate the Hessian rather than its inverse, and in that case you'd be better off solving.
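For reference, here's a sketch of the standard two-loop recursion (my own NumPy rendering, not code from the post) that produces $H^{-1} g$ from the stored pairs:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: apply the L-BFGS inverse-Hessian
    approximation to g using the stored (s_k, y_k) pairs,
    ordered oldest to newest. Never forms any n x n matrix."""
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair back to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        alphas.append(alpha)
    # Initial scaling, a common choice: gamma = s'y / y'y for the newest pair.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair forward to newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return r  # approximates H^{-1} g; the search direction is -r
```

Each application costs O(mn) for m stored pairs, which is the whole point: no cubic solve, no quadratic storage.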
What I mean is that, if just doing regular Newton, I'm not sure I've ever seen an example using the interface you show. I think the vast majority of Newton's method implementations pass H rather than H^-1, and get the search direction via "solve_H_g(H,g)" rather than "solve_iH_g(iH,g)". It takes cubic time to compute H^-1 from H (after which you can compute (H^-1)g), and it takes cubic time to solve for x such that H*x = g, but the latter is said to be more stable, and so preferred.
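To make that concrete, a toy NumPy comparison (my own example): both routes cost cubic time and give the same direction, but the solve is the one that's usually recommended:

```python
import numpy as np

H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = np.array([1.0, 2.0])

d_inv = np.linalg.inv(H) @ g    # explicit inverse: cubic, less stable
d_solve = np.linalg.solve(H, g) # LU-based solve: also cubic, preferred
assert np.allclose(d_inv, d_solve)
```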
I know this is irrelevant to the main part of the post, which is to explain L-BFGS; I'm just genuinely curious whether there are applications where it's better to pass the inverse of H rather than H itself for some reason.