
Power Method for Approximating Eigenvalues [pdf] - kercker
http://ergodic.ugr.es/cphys/LECCIONES/FORTRAN/power_method.pdf
======
hervature
The convergence analysis is a bit lacking, but there is a significant speed-up
when you store the last power of A and keep multiplying by that instead of
A. That is: A, A^2, A^4, A^8, ...

It makes the second case they give go from 60 iterations down to 7.
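
A rough NumPy sketch of what I mean (toy matrix; the names are just illustrative):

    import numpy as np

    def power_iteration(A, x, iters):
        # Plain power method: one O(n^2) matrix-vector product per step.
        for _ in range(iters):
            x = A @ x
            x /= np.linalg.norm(x)
        return x

    def squared_power_iteration(A, x, iters):
        # Keep squaring the stored matrix: B runs through A, A^2, A^4, ...
        # so after s steps x has effectively been hit with A^(2^s - 1),
        # at the cost of one O(n^3) matrix-matrix product per step.
        B = A.copy()
        for _ in range(iters):
            x = B @ x
            x /= np.linalg.norm(x)
            B = B @ B
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    v = squared_power_iteration(A, np.array([1.0, 0.0]), 7)
    print(v, v @ A @ v)   # dominant eigenvector and its Rayleigh quotient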

~~~
jules
Multiplying a matrix by a matrix is a lot more expensive than multiplying a
matrix by a vector: O(n^3) flops versus O(n^2) for dense n x n matrices, so the
squaring trick only pays off if it cuts the iteration count enough to cover
that gap.

~~~
nimish
Yeah, and if you're gonna go matrix-matrix, you might as well do the QR method.
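
Something like this, as a bare-bones sketch (unshifted QR; real implementations
first reduce to Hessenberg form and use shifts):

    import numpy as np

    def qr_iteration(A, iters=100):
        # A_{k+1} = R_k Q_k is similar to A_k; for well-behaved matrices
        # the iterates approach upper-triangular form, with the
        # eigenvalues appearing on the diagonal.
        Ak = A.copy()
        for _ in range(iters):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q
        return np.diag(Ak)

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(qr_iteration(A))   # approx [3.618, 1.382]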

------
acidburnNSA
Fun fact: the power method is what all neutronics codes that simulate neutron
distributions in nuclear reactors use. The diffusion/transport equation in a
multiplying medium is an eigenvalue equation, and the dominant eigenvalue is
the inverse of k, the multiplication factor.
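
The textbook scheme is usually called fission source iteration. Roughly (M and
F below are hypothetical stand-ins for the discretized neutron-loss and
fission-production operators):

    import numpy as np

    def k_eigenvalue(M, F, tol=1e-8, max_iter=1000):
        # Power iteration on the problem M*phi = (1/k)*F*phi: each step
        # applies M^{-1}F; k is updated from the fission-source ratio.
        phi = np.ones(M.shape[0])
        k = 1.0
        for _ in range(max_iter):
            phi_new = np.linalg.solve(M, F @ phi / k)
            k_new = k * np.sum(F @ phi_new) / np.sum(F @ phi)
            phi = phi_new / np.linalg.norm(phi_new)
            if abs(k_new - k) < tol:
                break
            k = k_new
        return k_new, phi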

~~~
acidburnNSA
EDIT: Explained in detail in another comment:
[https://news.ycombinator.com/item?id=16017949](https://news.ycombinator.com/item?id=16017949)

------
chestervonwinch
Is there a relationship between the power method and some "standard"
optimization algorithm (grad. descent, Newton's, ...) applied to maximization
of the Rayleigh quotient?

~~~
nimish
Conjugate gradient (and other Krylov subspace iterations) is basically a
generalization of power iteration where you consider the whole set {b, Ab,
A^2 b, ...} rather than just the latest vector.
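
A bare-bones sketch of that generalization (the Arnoldi process plus Ritz
values; names are just illustrative):

    import numpy as np

    def arnoldi(A, b, m):
        # Build an orthonormal basis Q for span{b, Ab, ..., A^(m-1) b}
        # and the small projected matrix H = Q^T A Q.
        n = len(b)
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):        # orthogonalize against the basis
                H[i, j] = Q[:, i] @ w
                w -= H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            Q[:, j + 1] = w / H[j + 1, j]
        return Q[:, :m], H[:m, :m]

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 100)); A = A + A.T   # symmetric test matrix
    Q, H = arnoldi(A, rng.standard_normal(100), 20)
    # Eigenvalues of the small H ("Ritz values") approximate the
    # extreme eigenvalues of the big A.
    print(sorted(np.linalg.eigvals(H).real)[-3:])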

~~~
selimthegrim
MATLAB, for one, uses Arnoldi with restarts in its eigs function (picking an
arbitrary seed to zoom in on the "good" parts of the Krylov subspace). Imagine
we pick an Arnoldi vector for restarting (a member of the orthonormal basis
which spans that subspace); it is equivalent to some polynomial in the initial
matrix applied to our initial vector. We essentially want to pick this vector
using a polynomial (not the characteristic one) whose values peak at the true
eigenvalues of the matrix we care about - this lets us pick out the Krylov
space that better approximates the correct eigenvector, and hence the correct
eigenvalue.
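
The Python analogue is scipy.sparse.linalg.eigs, which wraps ARPACK's
implicitly restarted Arnoldi; a usage sketch:

    import numpy as np
    from scipy.sparse.linalg import eigs

    rng = np.random.default_rng(1)
    A = rng.standard_normal((500, 500))
    # 4 largest-magnitude eigenvalues; ncv sets the size of the Krylov
    # subspace kept between implicit restarts.
    vals, vecs = eigs(A, k=4, which='LM', ncv=20)
    print(vals)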

------
gcr
Once you know the dominant eigenvector, I recall there was some trick you
could do to get the second-dominant eigenvector, by projecting the dominant
one out somehow. How can you repeat power iteration to get all the
eigenvectors of a matrix?
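
For a symmetric matrix I think the trick I'm remembering is deflation -
subtract the found eigenpair off and iterate again. A rough sketch:

    import numpy as np

    def power(A, iters=1000):
        x = np.ones(A.shape[0])
        for _ in range(iters):
            x = A @ x
            x /= np.linalg.norm(x)
        return x @ A @ x, x            # Rayleigh quotient, eigenvector

    def eigs_by_deflation(A):
        # Hotelling deflation (symmetric A): A - lam*v*v^T has the same
        # eigenpairs except that lam is replaced by 0, so repeated power
        # iteration peels the eigenvalues off in order of magnitude.
        A = A.copy()
        pairs = []
        for _ in range(A.shape[0]):
            lam, v = power(A)
            pairs.append((lam, v))
            A -= lam * np.outer(v, v)
        return pairs

    print(eigs_by_deflation(np.array([[4.0, 1.0], [1.0, 2.0]])))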

~~~
qwerty1793
The eigenvalues of (A - kI)^{-1} are 1/(x - k), one for each eigenvalue x of
A, and the largest in magnitude is the one whose x is closest to k. So by
running power iteration on (A - kI)^{-1} (shift-invert) for various values of
k, you can find all of the eigenvalues of A.
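
A minimal sketch (factor A - kI once; each power step is then just a cheap
solve):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def shift_invert_power(A, shift, iters=200):
        # Power iteration on (A - shift*I)^{-1}: converges to the
        # eigenvector whose eigenvalue is closest to the shift.
        lu = lu_factor(A - shift * np.eye(A.shape[0]))
        x = np.ones(A.shape[0])
        for _ in range(iters):
            x = lu_solve(lu, x)
            x /= np.linalg.norm(x)
        return x @ A @ x, x        # eigenvalue of A nearest the shift

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(shift_invert_power(A, shift=1.0))  # ~1.382, the eigenvalue near 1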

~~~
acidburnNSA
In practice this is numerically unstable, and it's generally better to use
methods that build a larger orthogonal subspace (e.g. Krylov methods) to get
multiple eigenvalues/eigenvectors. In nuclear reactor problems you can maybe
get about 5 eigenvalues by filtering out the larger ones, but after that it
gets noisy fast. I've gotten up to 1000 good eigenvalues from a large neutron
diffusion problem using Arnoldi.

------
xchip
What are you all computing eigenvalues for? I am curious.

~~~
noelwelsh
PageRank computes the principal eigenvector of the transition matrix
describing the connectivity of the web. PageRank is no longer, AFAIK, the
primary signal that Google uses, but it's an idea that was worth billions of
dollars at least.
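
A toy version of that computation (hypothetical 4-page web; damping factor
0.85 as in the original paper):

    import numpy as np

    # Column-stochastic link matrix for a made-up 4-page web:
    # entry [i, j] = probability of following a link from page j to page i.
    L = np.array([[0.0, 0.5, 0.0, 0.5],
                  [1/3, 0.0, 0.0, 0.0],
                  [1/3, 0.0, 0.0, 0.5],
                  [1/3, 0.5, 1.0, 0.0]])

    d = 0.85                                   # damping factor
    n = L.shape[0]
    G = d * L + (1 - d) / n * np.ones((n, n))  # the "Google matrix"

    r = np.ones(n) / n
    for _ in range(100):                       # power iteration; G is
        r = G @ r                              # stochastic, so r stays a
                                               # probability vector
    print(r)                                   # PageRank scores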

~~~
hyperbovine
$25B, in fact:
[https://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf](https://www.rose-hulman.edu/~bryan/googleFinalVersionFixed.pdf)

------
tilt_error
What textbook is this excerpt from?

~~~
devxpy
Erwin Kreyszig, Advanced Engineering Mathematics

~~~
dajohnson89
Yep, I recognize it. That's an awesome book.

