
Change of basis in Linear Algebra - kilimchoi
http://eli.thegreenplace.net/2015/change-of-basis-in-linear-algebra/
======
cygnus_a
The TL;DR intuition of a change of basis is this: you have a vector of length L
that points from the origin up along the z-axis. If you'd like to rotate
your perpendicular measuring sticks, x->x', y->y' and z->z' (i.e., rotate
your coordinate system while maintaining perpendicularity and vector length),
then you've changed your basis. Say you rotate your coordinate system 90
degrees about the y-axis: now your vector points in the new x-direction (so
x -> -z, y -> y and z -> x, and your vector is (1,0,0) instead of (0,0,1)).

You can do that with a matrix operation (M*v = v'):

[ 0  0  1]   [0]   [1]
[ 0  1  0] * [0] = [0]
[-1  0  0]   [1]   [0]

In fact, this matrix rotates any vector 90 degrees about the y-axis.
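A minimal numpy sketch of the matrix above, checking both the rotated vector and the length-preserving (orthogonal) property:

```python
import numpy as np

# 90-degree rotation about the y-axis (the matrix from the comment above)
M = np.array([[ 0, 0, 1],
              [ 0, 1, 0],
              [-1, 0, 0]])

v = np.array([0, 0, 1])   # unit vector along the z-axis
v_new = M @ v
print(v_new)              # [1 0 0] -- the vector now points along x

# Length is preserved because M is orthogonal: M.T @ M is the identity
print(np.allclose(M.T @ M, np.eye(3)))  # True
```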

Basis vectors can also be more abstract than that. For instance, they're
useful in quantum mechanics for simplifying the Schroedinger equation
(sometimes from a second-order differential equation to a first-order one) by
changing from a position basis to a momentum basis, in effect rewriting your
derivatives from a different point of view.
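The point about derivatives can be made concrete with the Fourier transform, which is the map between the position and momentum bases: in the momentum basis, d/dx becomes multiplication by i*k. A minimal numpy sketch (the grid and test function are my own choices, not from the comment):

```python
import numpy as np

# In the momentum (Fourier) basis, d/dx is just multiplication by i*k,
# so differentiation becomes algebraic.  Demonstrated with the FFT.
N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.sin(3 * x)                                  # test function
k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2 * np.pi   # angular wavenumbers

# Change to the momentum basis, multiply by i*k, change back
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

print(np.allclose(df, 3 * np.cos(3 * x)))  # True: matches the exact derivative
```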

~~~
j2kun
> rotate your coordinate system, maintaining perpendicular-ness and vector
> length

Not necessary for every change of basis.

~~~
cygnus_a
Yes, it has to be an orthonormal basis to maintain those properties :)
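To illustrate the exchange above: any invertible matrix defines a valid change of basis, but only an orthonormal one preserves lengths and angles. A small sketch with a non-orthonormal basis of my own choosing:

```python
import numpy as np

# Columns of B are the new basis vectors: non-orthogonal, non-unit length.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

v = np.array([3.0, 4.0])          # coordinates in the standard basis
coords = np.linalg.solve(B, v)    # coordinates of the same vector in basis B

# The vector itself is unchanged -- only its description changed:
print(np.allclose(B @ coords, v))                  # True

# But this basis change does NOT preserve length:
print(np.linalg.norm(coords), np.linalg.norm(v))   # the two norms differ
```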

------
jimmahoney
I've always particularly liked the bra-ket notation, typically used by quantum
physicists, for making this sort of problem intuitive. (There's a Wikipedia
article on the notation, though I don't think it does it justice.)

In that notation, a vector |a>, which is basis independent, has components
<i|a> in some basis |i>, where (say) |i_1>, |i_2> are the basis vectors. Then
to change to another basis <k|, the "operator one" (the identity written as
the sum over i of the outer products |i><i|) is inserted to get
<k|a> = sum over i of <k|i><i|a>, which turns into the matrix mechanics.

(Each of the outer products e.g. |1><1| is a "projection operator" which
projects a vector onto that basis vector. The sum of all of them projects onto
the whole space spanned by the vectors, which is the same as doing nothing,
which is therefore the identity operator.)

Once you get your head around the connections between coordinates (i.e. a_x =
a_1) and the dot product with a basis vector (i.e. dot(i_1, a) = <i_1 | a> =
<1|a>), this notation can make the whole thing intuitive and mechanical.
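The identity-insertion trick above can be checked numerically in a small real vector space. A sketch (the rotated basis is my own example, not from the comment), where bras are row vectors and kets are column vectors:

```python
import numpy as np

# |i> basis: columns of the identity.  |k> basis: a rotated orthonormal pair.
theta = 0.3
K = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns are the kets |k_m>

a = np.array([2.0, 1.0])          # the components <i|a> in the |i> basis

# Sum over i of |i><i| (outer products) is the identity operator:
identity = sum(np.outer(e, e) for e in np.eye(2))
print(np.allclose(identity, np.eye(2)))  # True

# <k|a> = sum over i of <k|i><i|a>: inserting the identity turns the basis
# change into a matrix-vector product with the overlap matrix <k|i> = K.T
a_in_k = K.T @ a
print(np.allclose(K @ a_in_k, a))        # True: same vector, new components
```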

I have an explanation of this online at
http://cs.marlboro.edu/talks/bra_ket.pdf .

------
srean
Given the frequency (pun intended) with which Fourier transforms and series
come up here at HN it might be worth noting that they are essentially a change
of basis for the vector space of functions. Think of functions as infinitely
long vectors with each argument defining a component.
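For the finite-dimensional version of this, the discrete Fourier transform is literally a change of basis: the new basis vectors are sampled complex exponentials. A minimal sketch with the unitary DFT matrix:

```python
import numpy as np

# The DFT as a change of basis for length-N signals.  The new basis vectors
# are the complex exponentials e_k[n] = exp(2*pi*i*k*n/N) (columns of F).
N = 8
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # unitary DFT basis

f = np.random.rand(N)        # a "function" as a length-N vector

coeffs = F.conj().T @ f      # coordinates of f in the Fourier basis
f_back = F @ coeffs          # change back to the standard basis

print(np.allclose(f_back, f))                   # True: same vector, two views
print(np.allclose(F.conj().T @ F, np.eye(N)))   # True: the basis is orthonormal
```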

------
dtft
Nice to see a clear explanation of this. Always hated trying to learn and re-
learn it during my quantum days.

