Change of basis in Linear Algebra | 44 points by kilimchoi on July 23, 2015 | 6 comments

 The TL;DR intuition of a change of basis is this: you have a vector of length L that points from the origin along the z-axis. If you rotate your perpendicular measuring sticks x -> x', y -> y', z -> z' (i.e., rotate your coordinate system while keeping the axes perpendicular and vector lengths unchanged), then you've changed your basis. Say you rotate your coordinate system 90 degrees about the y-axis: your vector now points in the new x-direction (so x -> -z, y -> y, z -> x, and your vector is (1,0,0) instead of (0,0,1)). You can do that with a matrix operation (M*v = v'):

  [ 0 0 1]   [0]   [1]
  [ 0 1 0] * [0] = [0]
  [-1 0 0]   [1]   [0]

 In fact, this matrix rotates any vector 90 degrees about the y-axis. Basis vectors can also be more abstract than that. For instance, they're useful in quantum mechanics for simplifying the Schrödinger equation (sometimes from a second-order differential equation to a first-order one) by changing from a position basis to a momentum basis, in effect rewriting your derivatives from a different point of view.
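 The rotation described above can be checked in a few lines of NumPy; this is just a sketch of the same M*v = v' computation, with the orthogonality check (M^T M = I) made explicit:

```python
import numpy as np

# 90-degree rotation about the y-axis, as in the comment above.
M = np.array([
    [ 0, 0, 1],
    [ 0, 1, 0],
    [-1, 0, 0],
])

v = np.array([0, 0, 1])   # unit vector along the z-axis
v_prime = M @ v           # the same vector, expressed in the rotated frame

print(v_prime)            # [1 0 0]

# Rotations preserve lengths and angles, so M is orthogonal: M^T M = I.
assert np.allclose(M.T @ M, np.eye(3))
```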
 > rotate your coordinate system, maintaining perpendicular-ness and vector length

 Not necessary for every change of basis.
 Yes, it has to be an orthonormal basis (mutually perpendicular unit vectors) to maintain those properties :)
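 To illustrate the point being made in these two comments, here is a small sketch (the basis vectors b1, b2 are made up for the example): a change of basis to a non-orthonormal basis is perfectly valid, but the coordinate vector's length no longer matches the vector's true length.

```python
import numpy as np

# A valid basis for R^2 whose vectors are neither unit length nor
# perpendicular: b1 = (2, 0), b2 = (1, 1).
B = np.array([[2, 1],
              [0, 1]])        # columns are the new basis vectors

v = np.array([1.0, 1.0])      # coordinates in the standard basis
coords = np.linalg.solve(B, v)  # coordinates in the basis {b1, b2}

# The change of basis works fine, but the length of the coordinate
# vector differs from the length of v, because B is not orthogonal.
assert not np.isclose(np.linalg.norm(coords), np.linalg.norm(v))
```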
 I've always particularly liked the bra-ket notation for making this sort of problem intuitive, typically used by quantum physicists. (There's a Wikipedia article on this notation, though I don't think it does it justice.) In that notation, a vector |a> which is basis independent has components <i|a> in some basis, where (say) |i_1>, |i_2> are the basis vectors. Then to change to another basis |j>, insert the identity: <j|a> = sum over i of <j|i><i|a>, which turns into the matrix mechanics. (Each of the outer products, e.g. |1><1|, is a "projection operator" which projects a vector onto that basis vector. The sum of all of them projects onto the whole space spanned by the vectors, which is the same as doing nothing, which is therefore the identity operator.) Once you get your head around the connections between coordinates (i.e. a_x = a_1) and the dot product with a basis vector (i.e. dot(i_1, a) = <i_1|a> = <1|a>), this notation can make the whole thing intuitive and mechanical. I have an explanation of this online at http://cs.marlboro.edu/talks/bra_ket.pdf .
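 The projection-operator idea in the comment above is easy to verify numerically; this sketch uses an arbitrary orthonormal basis of R^2 (the 45-degree rotated one) to show that the outer products |i><i| sum to the identity, and that the resulting change of basis preserves lengths:

```python
import numpy as np

# Two orthonormal basis vectors |i_1>, |i_2> in R^2 (45-degree rotated basis).
i1 = np.array([1.0, 1.0]) / np.sqrt(2)
i2 = np.array([1.0, -1.0]) / np.sqrt(2)

# Each outer product |i><i| projects onto one basis vector...
P1 = np.outer(i1, i1)
P2 = np.outer(i2, i2)

# ...and their sum projects onto the whole space: the identity operator.
assert np.allclose(P1 + P2, np.eye(2))

# Changing basis is inserting the identity: the new components <i|a>
# are just dot products with the basis vectors.
a = np.array([3.0, 4.0])
coords = np.array([i1 @ a, i2 @ a])

# An orthonormal change of basis preserves the vector's length.
assert np.isclose(np.linalg.norm(coords), np.linalg.norm(a))
```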
 Given the frequency (pun intended) with which Fourier transforms and series come up here at HN it might be worth noting that they are essentially a change of basis for the vector space of functions. Think of functions as infinitely long vectors with each argument defining a component.
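 For a finite number of samples this becomes concrete: the discrete Fourier transform is literally a unitary change of basis. A sketch (the normalization by sqrt(N) is chosen here to make the basis orthonormal, so it differs from NumPy's default FFT scaling by that factor):

```python
import numpy as np

N = 8
n = np.arange(N)

# Columns of F are the orthonormal discrete Fourier basis vectors.
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

x = np.random.rand(N)     # a "function" sampled at N points
X = F.conj().T @ x        # its coordinates in the Fourier basis

# F is unitary, so this is a length-preserving change of basis
# (Parseval's theorem), and it matches the FFT up to the sqrt(N) scaling.
assert np.allclose(np.linalg.norm(X), np.linalg.norm(x))
assert np.allclose(X, np.fft.fft(x) / np.sqrt(N))
```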
 Nice to see a clear explanation of this. Always hated trying to learn and re-learn it during my quantum days.
