I took the first route and retained none of the information, despite doing well in the course. Then I took computer graphics and cried all the way home.
This link seems like a happy middle ground.
He starts by explaining that linear algebra is about studying the properties of linear transformations over vector spaces, and then goes on to explain what this means. In many theoretical books it takes ages to get this message, and in some practical ones you get lost in a sea of matrices without even understanding what a matrix actually is. Furthermore, all his proofs are very slick.
I think the turning point came when I realized that "abstract" linear algebra was actually much closer to physical reality than "applied" linear algebra. Superficially, it seems like the exact opposite state of affairs from computer engineering, where grids of numbers take you closer to "bare metal" and "higher level reasoning" takes you further away. In math, it's the grids of numbers that are further removed from "bare metal" and the abstract reasoning that you can count on when all else fails!
I didn't need to be convinced that I would eventually come to regret having an insecure foundation so much as I was mixed up about up vs down and trying to build a foundation in the middle of the sky.
Despite having studied a decent dose of math during my undergrad (and currently during my grad), I keep on finding awesome books like Axler, Hubbard, Halmos, Neveu... that make revisiting math so much more illuminating. Everything clicks much more quickly and I never lose momentum now.
Some of these books I found online, so I'd definitely like to exchange more references, problems, tips & experiences.
I can't guarantee that everyone is going to have the same experience, but yeah, it's a good book.
It's not necessarily that Axler is that good. It's that it's just solidly good all throughout. It has the right focus. The goal of the book was very clearly to explain linear algebra, not to make money or get contract deals with college bookstores. And the proofs are very clearly written.
I'm sure there are other books out there which are just as good as Axler. But compared to the more popular linear algebra textbooks, at least at the time, it seemed like it was in a class of its own.
I got a D in linear algebra, my lowest grade in university and one of my lowest grades ever, but excelled in computer graphics, and even got permission to take a graduate level course in it for elective credit as an undergrad.
My recollection is that the problem sets in the linear algebra class seemed really abstract and vague, and I couldn't reason through them very well because I couldn't visualize what we learned, given the information from the class.
Computer graphics, however, let me visualize what we covered, and suddenly everything made sense. I could see, literally, what the end result was supposed to be, and it just clicked.
Now that I think about it, it's kind of a shame I had the linear algebra class first. The CG experience would have helped me out quite a bit.
I have written a blog post that describes matrices as functions. It's titled "Correcting common mathematical misconceptions" and walks through N dimensions, linear functions, and linear algebra. After explaining what linear functions are, I mention nonlinear functions, how they often don't have a closed-form solution, and why mathematicians find bounds instead.
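To make "matrices as functions" concrete (my own sketch, not from the post, and it assumes numpy): a matrix A literally is the function v ↦ Av, and "linear" just means it respects sums and scaling:

```python
import numpy as np

# A matrix *is* a function: it maps input vectors to output vectors.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

def f(v):
    """The linear function v -> A v represented by the matrix A."""
    return A @ v

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
a, b = 4.0, -1.5

# Linearity: f(a*x + b*y) == a*f(x) + b*f(y), up to float rounding.
assert np.allclose(f(a * x + b * y), a * f(x) + b * f(y))
```

Any function from R^n to R^m satisfying that identity for all inputs is representable by some matrix, which is why the two notions get conflated in the first place.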
OH! Bloody hell, so that's why we write it like that!
>The eigenvector and eigenvalue represent the “axes” of the transformation.
>Consider spinning a globe: every location faces a new direction, except the poles.
>An “eigenvector” is an input that doesn’t change direction when it’s run through the matrix (it points “along the axis”). And although the direction doesn’t change, the size might. The eigenvalue is the amount the eigenvector is scaled up or down when going through the matrix.
Very nice! I'd had an algebraic (ahaha) understanding of eigenvectors/values before, but hadn't a geometric intuition. Thank you. Now I can neatly imagine why the eigenvector is orthogonal/perpendicular to the "direction" of the transformation.
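A quick numerical check of the globe picture (my sketch, assuming numpy): take a rotation about the z-axis and ask for its eigenvectors. The only real eigenvalue is 1, and its eigenvector is the axis through the poles:

```python
import numpy as np

theta = 0.7  # spin the "globe" by 0.7 radians about the z-axis
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

vals, vecs = np.linalg.eig(Rz)

# Two eigenvalues are a complex pair (e^{+/- i theta}); the real one is 1.
i = int(np.argmin(np.abs(vals - 1.0)))
axis = np.real(vecs[:, i])
axis /= np.linalg.norm(axis)

# The eigenvector is +/-(0, 0, 1): the poles, unmoved by the spin.
assert np.allclose(Rz @ axis, axis)
```

The complex eigenvalue pair encodes the in-plane rotation; points off the axis have no real eigendirection, which matches the picture that every non-pole location changes direction.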
That's a trivial example, of course.
Contrast with a symmetry group of a hexagon. There's no (non-degenerate) notion of a continuous transition from identity to rotate-by-pi.
A Lie _group_ is a group which is also a manifold, such that the group operations are smooth. (I don't know what it means for a set to be smooth.)
A Lie _algebra_ is a vector space together with a bilinear operation (the Lie bracket, written [a, b]) which also satisfies the Jacobi identity:
[a, [b, c]] + [c, [a, b]] + [b, [c, a]] = 0.
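The standard concrete example (my sketch, assuming numpy): square matrices with the commutator [a, b] = ab - ba form a Lie algebra, and the Jacobi identity as written above can be checked numerically on random matrices:

```python
import numpy as np

def bracket(a, b):
    """The commutator [a, b] = ab - ba, a Lie bracket on square matrices."""
    return a @ b - b @ a

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))

# [a, [b, c]] + [c, [a, b]] + [b, [c, a]] = 0
jacobi = bracket(a, bracket(b, c)) + bracket(c, bracket(a, b)) + bracket(b, bracket(c, a))
assert np.allclose(jacobi, 0)
```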
It's a group with an infinite number of elements which is smooth, in the sense that there is a notion of elements being close to or far from each other, and locally around each element the group looks like R^n (for some n). Of course, the group operation must play nice with this notion of "close": if elements A and B are close, then after multiplying both of them by X, AX and BX can't be too far apart. The group operation does not "tear" the structure apart.
So in the example above of rotations on the plane, you can visualize the group as a circle where a point represents the rotation by that angle. Locally it looks like R, but it has a different global structure.
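That circle picture can be made concrete (my sketch, assuming numpy): parametrize the group by the angle, and the group operation, matrix multiplication, is just addition of angles, which is visibly continuous:

```python
import numpy as np

def R(theta):
    """Rotation of the plane by theta: one point on the 'circle' of rotations."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = 0.3, 1.1

# Composing rotations adds angles, so nearby angles give nearby matrices:
# the group operation does not "tear" the circle apart.
assert np.allclose(R(a) @ R(b), R(a + b))
assert np.allclose(R(0.0), np.eye(2))
```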
You add and multiply these according to the same rules you're already familiar with. So, for example, if you add 1 to ...999, the last digit of the output is 0, and you get a carry. Making the second to last digit 0, with another carry. And so on and so on, making the result ...0000 over all. Thus, ...999 acts like -1.
(If we were working in base two, this last example would be just like the "two's complement" you are perhaps familiar with from computer arithmetic!)
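You can check the carry argument by truncating to the last N digits, i.e. working mod 10^N (my sketch; the truncation is an assumption, since a true 10-adic number has infinitely many digits):

```python
# 10-adic numbers truncated to their last N digits = integers mod 10**N.
N = 8
MOD = 10 ** N

nines = MOD - 1  # ...99999999: the last N digits are all 9
assert (nines + 1) % MOD == 0            # every digit is wiped out by carries
assert (nines * 7) % MOD == (-7) % MOD   # it multiplies like -1, too

# And in base two this is exactly two's complement: ...1111 acts like -1.
bits = (1 << 8) - 1  # 0b11111111
assert (bits + 1) % (1 << 8) == 0
```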
In fact, most discussion of p-adic numbers isn't done in base ten. Instead, people typically focus on p-adics in a prime base (hence the p). Why? Because in a prime base, you will find that every nonzero p-adic number has a multiplicative inverse, which is very convenient (while in a composite base, you will find that sometimes nonzero numbers multiply to zero). But there's nothing actually stopping you from making use of the notion for non-prime base, should you be interested in doing so; the notion is still perfectly coherent. (That having been said, another reason mathematicians focus on p-adics in prime bases is that the Chinese Remainder Theorem essentially allows one to reduce the study of all other bases to this case.)
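Here's the zero-divisor phenomenon in base ten, made concrete (my sketch): repeatedly squaring 5 stabilizes the trailing digits to the 10-adic idempotent ...90625, and then e and e - 1 are both nonzero yet multiply to zero:

```python
# Integers mod 10**N model the last N digits of a 10-adic number.
N = 12
MOD = 10 ** N

e = 5
for _ in range(100):      # squaring stabilizes the last N digits quickly
    e = (e * e) % MOD

assert e not in (0, 1)           # e is neither 0 nor 1 ...
assert (e * e) % MOD == e        # ... yet e^2 = e (an idempotent),
assert (e * (e - 1)) % MOD == 0  # so e and e - 1 multiply to zero
```

In a prime base p this can't happen: mod p^N, anything not divisible by p is invertible, which is exactly the convenience mentioned above.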
There's a lot of beautiful further theory to explore here, but again, the basic idea is hopefully quite simple. Let me know if you have any questions and I'll be happy to try explaining further or more clearly.
Someone already asked about the p-adics:
Love the trend of indie publishing that has been exploding in the past 2 to 3 years. The biggest issue is that the marketing & design are often mediocre. Content-wise, though, it's completely awesome. I lost interest in books from larger publishers years ago.
Linear algebra itself is just a study of vector spaces and linear operators and transforms and such.
(We all know that causality only flows left-to-right.)