Eigenvectors and Eigenvalues (2015) | 369 points by vimalvnair 9 months ago | 86 comments

 3Blue1Brown has a good series on YouTube for building intuition in linear algebra: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2x...

In one of the last videos in the (relatively short) series, he discusses eigen-*: "eigen-stuffs are straightforward but only make sense if you have a solid visual understanding of the prerequisites (linear transformations, determinants, linear systems of equations, change of basis, etc.). Confusion about eigen-stuffs usually has more to do with a shaky foundation than the eigen-things themselves." https://youtu.be/PFDu9oVAE-g

All of the videos in the series, including this later one on eigen-things, use animations to show what the number crunching is doing to the coordinate system.
 3Blue1Brown (Grant Sanderson) is really, really good. I follow a number of education channels on YouTube, and Grant blows them all out of the water for the kind of insights, new perspectives, and inspiration he provides. His animations are fantastically put together to clearly and unobtrusively illustrate the point he's making. I also really like his voice: soothing, clear, with enough intonation to avoid monotony, and a perfect pace. I wish I had his linear algebra series back in college; I suspect I would have done much better.

He's the creator I support the most on Patreon: https://www.patreon.com/3blue1brown
 I’ve taken 3 college level courses on linear algebra and only really got an intuition for eigenvectors/eigenvalues when watching his video series. Super great!
 I had problems with Principal Component Analysis; nice to see he's got one on that as well! http://setosa.io/ev/principal-component-analysis/ (edit: ugh, contains an ad for MacKeeper)
 What, you don't want to remove your mac from trash?Incidentally, that particular grammatical quirk is one I've seen a number of times. Is it a thing for people coming from Russian? Or just general difficulty with remembering the order of the two subjects in two-subject phrases?
 > I follow a number of education channels on YouTube

Any recommendations? I would love to look into channels that you think are like 3Blue1Brown but in other subjects (natural sciences, history, art, etc.).
 I just realized I had 80% of these subscribed already; then I looked at how many channels I subscribe to (700+). I like tinkering with things, so I have some mechanical/electrical engineering/maker channels on my list as well. My favorites from that list:

* Tom Scott - lots of interesting, unique topics related to engineering, programming, and history: https://www.youtube.com/channel/UCBa659QWEk1AI4Tg--mrJ2A
* Colin Furze - lots of crazy welding projects: https://www.youtube.com/channel/UCBa659QWEk1AI4Tg--mrJ2A
* Mark Rober - ex-NASA engineer, similar to SmarterEveryDay: https://www.youtube.com/channel/UCY1kMZp36IQSyNx_9h4mpCg
* Hacksmith - interesting project build videos, normally related to Marvel comics: https://www.youtube.com/user/MstrJames
* LiveOverflow - web/binary hacking with great case studies: https://www.youtube.com/channel/UClcE-kVhqyiHCcjYwcpfj9w
* Wendover Productions - great documentaries on things like airplane and shipping logistics: https://www.youtube.com/wendoverproductions
* Strange Parts - ex-programmer who sources components and parts overseas
 700? Damn! And I have trouble tracking what to watch with my few dozen subscriptions!

Also, a bit of a rant about YouTube's recommendation engine: it doesn't take my subscriptions into account well enough, offering many videos from channels I don't subscribe to and not enough from those I do. Does it work better for you with your firehose of subscriptions?
 Awesome but no Vsauce?
 Y'know, I hesitated. I admit I'm somewhat ambivalent about Vsauce, because Michael's somewhat wandering style makes it closer to entertainment than the usually more focused topics the other channels on my list work with. That said, Vsauce is definitely interesting and great for fostering curiosity and an inquisitive mindset.

Also his new channel DONG ("Do Online Now Guys"... wut?): https://www.youtube.com/channel/UClq42foiSgl7sSpLupnugGA

Also I should've mentioned Veritasium.
 Seems that PBS has a new one on Noether's theorem: https://www.youtube.com/watch?v=04ERSb06dOg
 Thank you! Introduces me to a whole new universe out there!
 Just to throw in my own anecdote, I took linear algebra twice (in high school with no college credit, and in college) and I still couldn't ever remember afterwards what an eigenvector was until I watched that series. Now I'll probably never forget. He is an astonishing educator.
 It’s possible to understand eigen-* without having an understanding of determinants. That’s how they’re introduced in “Linear Algebra Done Right” - http://linear.axler.net/
 And they exist in situations where determinants are difficult or impossible to define! Infinite dimensional vector spaces can still have transformations with eigenvectors but you generally can't define a determinant coherently for them (certainly they might have infinitely many eigenvalues and if the determinant is the product, then you have convergence problems). A classic example is that the standard Gaussian distribution is an eigenvector of the Fourier transform.
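A finite-dimensional shadow of that fact is easy to poke at with NumPy: a suitably discretized Gaussian is, to numerical precision, an eigenvector of the unnormalized DFT (the √N eigenvalue is just an artifact of NumPy's FFT convention). A sketch, with the grid and constants chosen by me:

```python
import numpy as np

# With symmetric indices m in [-N/2, N/2), the vector g[m] = exp(-pi m^2 / N)
# satisfies FFT(g) ≈ sqrt(N) * g -- the discrete echo of the Gaussian being
# fixed by the Fourier transform.
N = 64
m = np.fft.fftfreq(N) * N           # indices 0, 1, ..., N/2-1, -N/2, ..., -1
g = np.exp(-np.pi * m**2 / N)       # sampled Gaussian, in FFT ordering
G = np.fft.fft(g)

print(np.allclose(G, np.sqrt(N) * g))  # True: g is an eigenvector, eigenvalue sqrt(N)
```

The tails of the Gaussian are so small at the wrap-around point that the periodization error is below machine precision, which is why this holds so cleanly.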
 You just blew my mind with this comment. I'm a statistics undergrad student and I really need to up my math knowledge. Where is this fact from? Functional analysis?
 Yeah that would be a good course to learn it - functional analysis is basically linear algebra on infinite dimensional vector spaces. (Though that sells it a little short - moving to infinite dimensional vector spaces requires incorporating some topology too.)
 I just finished watching both his calculus series and his linear algebra series. I have to say 3Blue1Brown has made a mathematical masterpiece of a YouTube series.

Neither of those subjects really clicked with me until I could visualize it in a 2D/3D representation.
 Whenever this kind of stuff comes up I feel like a bit of a fraud...I’ve written a bunch of scientific data analysis code. I have a science PhD. Written large image analysis pipelines that worked as well as the state of the art... been published etc.For the most part I’ve found basic math and heuristics to be good enough. Every so often I go relearn calculus. But honestly, none of this stuff ever seems to come in handy. Maybe it’s because most of what I encounter is novel datasets where there’s no established method?I reasonably regularly pick up new discrete methods, but the numerical stuff never seems super useful...I don’t know, just a confession I guess... it never comes up on interviews either for what it’s worth.
 For a large fraction of probability theory, you only need two main facts from linear algebra.

First, linear transforms map spheres to ellipsoids. The axes of the ellipsoid are the eigenvectors.

Second, linear transforms map (hyper)cubes to parallelepipeds. If you start with a unit cube, the volume of the parallelepiped is the determinant of the transform.

That more or less covers covariances, PCA, and change of variables. Whenever I try to understand or re-derive a fact in probability, I almost always end up back at one or the other fact. They're also useful in multivariate calculus, which is really just stitched-together linear algebra.
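Both facts are cheap to sanity-check in NumPy. A sketch with an arbitrary symmetric 2x2 matrix of my choosing (symmetric, so the ellipse axes really are the eigenvectors):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
vals = np.linalg.eigvalsh(A)        # both positive here: ~1.38 and ~3.62

# Fact 1: push the unit circle through A; the image is an ellipse whose
# longest semi-axis has length equal to the largest eigenvalue.
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
ellipse = A @ np.vstack([np.cos(theta), np.sin(theta)])
longest = np.linalg.norm(ellipse, axis=0).max()
print(np.isclose(longest, vals.max(), atol=1e-4))   # True

# Fact 2: the unit square maps to a parallelogram of area |det A|,
# which is also the product of the eigenvalues.
print(np.isclose(np.linalg.det(A), vals.prod()))    # True
```

For a non-symmetric matrix the ellipse axes come from the singular vectors instead, as the reply below points out.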
 I think the first point is only true for symmetric matrices (which includes those that show up in multivariable calc). In general, the eigenvectors need not be orthogonal.
 Yep, you could well be right. The image of an ellipse under a linear transform is definitely an ellipse, but I'm not sure about the eigenvectors in the general case.The symmetric case is by far the most relevant for probability theory though.
 In general it's the eigenvectors of the positive-semidefinite (hence symmetric) part of the left polar decomposition.
 I use the 2nd point a lot for debugging 3D transforms. To expand on it: in three dimensions the three axes are

(1, 0, 0)
(0, 1, 0)
(0, 0, 1)

To find out where those axes land after a 3x3 matrix transform, you just read off the first, second, and third columns of the matrix, respectively. Then you can mentally visualize another unit cube in the new coordinate system, using those three vectors as the edges of the cube. Really basic change-of-basis stuff, but academic lectures don't emphasize how useful it is to be able to look at a matrix of numbers and immediately know what it does.
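That reading is a one-liner to verify in NumPy (the matrix values here are an arbitrary example of mine):

```python
import numpy as np

# The columns of a matrix are exactly where it sends the basis vectors.
M = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.1, 0.0, 4.0]])

e = np.eye(3)    # basis vectors (1,0,0), (0,1,0), (0,0,1) as columns
for i in range(3):
    # multiplying M by basis vector e_i just reads off column i
    assert np.array_equal(M @ e[:, i], M[:, i])
```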
 This concept totally changed my intuitive understanding of matrices. Beautifully illustrated in the below 3blue1brown video.
 Not a very useful addition but hypercube is to cube as parallelotope is to parallelepiped.
 Eigenvectors and eigenvalues show up everywhere, although sometimes in the form of an iterative estimate (PageRank is basically a power-method estimate of the dominant eigenvector of the web link graph).

They're in the same class as logarithms and Fourier transforms, IMHO. You won't need to calculate them by hand, but you should know what they do and why they're important.
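A toy sketch of that power-method view of PageRank, with a made-up 4-page link graph (the 0.85 damping factor follows the original PageRank paper):

```python
import numpy as np

links = np.array([      # column j lists who page j links to
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
], dtype=float)
M = links / links.sum(axis=0)     # column-stochastic transition matrix
d = 0.85                          # damping factor
G = d * M + (1 - d) / 4           # the "Google matrix"

r = np.full(4, 0.25)              # start from the uniform distribution
for _ in range(100):
    r = G @ r                     # one power-method step

# r has converged to the dominant eigenvector (eigenvalue 1 for a
# stochastic matrix); its entries are the PageRank scores.
assert np.allclose(G @ r, r)
print(r.round(3))
```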
 I use logarithms and FFTs daily, but got a C in Linear Algebra, after getting A in all other EE/Math/Phys courses. It kicked my ass once we got to proofs.
 Interesting, perhaps that’s it... I’ve used FFTs a few times, but even FFTs have never been make or break in terms of getting a pipeline or analysis working well.
 The Fourier transform is actually linked with linear algebra. You can think of it as taking a vector in an infinite dimensional Hilbert space (your signal) and decomposing it into its components (the amplitudes of the frequencies).
 The Fourier transform as used in practice in signal processing is ~always discrete, hence finite-dimensional. It's literally "just" a rotation with an especially nice decomposition that allows efficient multiplication.
 A particular fft in practice may be discrete but the relation between fft's of different resolutions of the "same" signal hints at the infinite structure which bundles up all the finite subspaces into one conceptual object.
 Why should you know what they do and why they're important? How does that practically change my R code?
 Because, say, knowing about Fourier transforms can help you write more efficient filtering or open up new ways to view your data--perhaps there's a really interesting behavior in the frequency domain you'd miss otherwise.If you just want to be a statistical script kiddie you do you. :)
 As a younger person (finishing up a Math BS), this resonates with my perspective.

IMHO, it comes down to individual beliefs about mathematical realism. Is there anything inherently real about math, or is it just a man-made, arbitrary set of cognitive tools? Is it valid to presume the existence of a Grand Mathematical Framework that can solve any problem a priori? Or is every problem unique and independent of mathematical developments?

From the little I've read about math history, it seems pretty clear that the problems came first, and the mathematics followed. Infinitesimal calculus, game theory, etc. were mathematical ideas developed primarily to solve real problems. Then 20th-century formalism came along and rebranded much of mathematics under a "clean" framework, while giving little attention to the human environment in which much of it was developed.

To me, it is a great shame that abstract mathematical concepts are made further abstract (e.g. in math education) by distancing them from their human roots. Instead of forcing oneself to understand this mathematical "new testament", I think it's far more productive to adopt the sort of irreverent attitude towards math you describe.

Einstein:

> "As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality."
 Reminds me of economic rationalism: model makes sense, and conclusions seem logical and compelling - but you can't tell if the model represents reality or even if crucial factors have been omitted.
 Perhaps colleges can start adding an "applied" math major with a focus on subjects that more directly involve the human environment, to alleviate the problem you're describing?
 Applied math is definitely a major. Do you mean something else?
 I'm doing some fluid simulation (CFD), and the actual code for finite differences is simple. But the analysis (of stability, etc.) is more mathy, and I don't feel confident reading other needed papers, because they are couched in math.

I mean, I can and have coded it, but I can't be sure how it will behave in all situations.

So that I can understand these papers, I'm going back to study math properly. I'm not fully convinced it's really needed (though how could I tell?), but I'm fully convinced it's needed to understand the papers.

Math is the Latin of CFD.
 I work in engineering and feel somewhat similar about CFD. There's always some element of doubt lurking in the back of my mind: "is it really correct in all circumstances?"

The most useful info I learnt at university was a couple of equations: Bernoulli's (for general observations about expected pressure drop), Ergun's (for flow through packed beds), and the general laws of thermodynamics. Those are mostly enough to sketch out an intuitive 'guess' about expected behavior in a large range of systems, and the underlying math is not particularly demanding.
 Have you seen the Method of Manufactured Solutions (MMS)?

You guess ("manufacture") a solution to the PDE. Then you plug it into the PDE. It won't be correct, so you just add source terms to make it correct. You now have an analytical solution for this PDE with those source terms. You can now run your simulation (with those source terms) and compare it with the "correct" solution.

Unfortunately, you'll introduce discretization error, because your delta x and delta t aren't infinitesimal. But using the order of accuracy of your discretization method, you know how the error should change with delta x and delta t. So you graph error against delta x on log-log axes and see if the curve matches the order of accuracy. Apparently, it shows even minor bugs in your code really well.

This approach scares me a little, because how do I know my math is correct? I need a better appreciation of how the order of accuracy interacts over time as well as space, the behaviour of error summaries, and how log-log plotting works.
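A minimal MMS sketch for the 1D heat equation, with a manufactured solution and a two-grid convergence check (a toy example of my own, not from any particular reference):

```python
import numpy as np

# MMS for u_t = u_xx. Manufacture u(x,t) = sin(pi x) e^{-t}; it doesn't
# solve the heat equation, so add the source s = u_t - u_xx,
# i.e. s = (pi^2 - 1) sin(pi x) e^{-t}, making u the exact solution.

def exact(x, t):
    return np.sin(np.pi * x) * np.exp(-t)

def solve(nx, T=0.1):
    """Explicit FTCS on [0, 1]; returns max error against the exact solution."""
    dx = 1.0 / nx
    x = np.linspace(0.0, 1.0, nx + 1)
    dt = 0.25 * dx**2                 # r = dt/dx^2 = 0.25, stable for FTCS
    u = exact(x, 0.0)
    t = 0.0
    for _ in range(int(round(T / dt))):
        s = (np.pi**2 - 1.0) * np.sin(np.pi * x) * np.exp(-t)   # manufactured source
        u[1:-1] += dt * ((u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2 + s[1:-1])
        t += dt
        u[0], u[-1] = exact(0.0, t), exact(1.0, t)   # boundaries from exact solution
    return np.abs(u - exact(x, t)).max()

e1, e2 = solve(20), solve(40)
print(e1 / e2)   # halving dx should cut the error ~4x for a 2nd-order scheme
```

With dt tied to dx^2, the first-order-in-time error also scales like dx^2, so the combined scheme shows clean second-order behavior; a bug in the stencil or the source term breaks the ratio immediately.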
 Really interesting, thanks! Yes, it’s certainly been my experience that adding more domain knowledge has helped me a huge amount.
 Well, you don't need this stuff until you do - some things still need definite, analytical performance guarantees. I'd be pretty nervous riding in an airplane that didn't use modern control theory, going over a bridge that wasn't designed with FEA, or riding in a self-driving car that ran on a Raspberry Pi instead of an RTOS...
 Why do you think in absolutes? I'm tempted to cite a certain Dilbert... but that would make me appear incredibly rude, which I really don't want to be. Of course it is easy to find examples where it's needed, and it is easy to see so too. So? Did I propose at any point "Nobody needs higher math for anything" or something similar? I don't see a need to argue about an argument never made.
 It's one of those things that you don't notice when it's missing, but probably would help a bit if you knew it. That being said, I have to deal with linear algebra every day, and aside from proofs (which obviously they help with), there have been maybe a handful of times that having a deep knowledge of eigenvectors and eigenvalues has helped significantly. Once or twice though, I've got massive speedups (>500x) just by knowing how to do the same thing in a more efficient way.My feeling is having a basic knowledge of testing/caching/memory management is way more useful when you're doing large image analysis.
 Interesting, well I’ll try to keep reviewing this stuff and hoping I find an application.I really would like to find an application in my work, because without that I find new techniques don’t really stick and after a few months I forget them...
 It depends on the field you're in. For example, if you're in an area that heavily uses differential equations (many engineering disciplines) then you're probably gonna be using eigenvectors a lot, as they are important for solving a lot of problems. Other areas may not need them at all. It also depends on your depth in the field. A rank and file engineer may not need to know anything about them - they underpin a lot of numerical methods, but get hidden away in software packages. Someone developing those software packages likely will, though. Techniques based on eigenvectors and eigenvalues are extremely important in my field (nuclear engineering... you've probably heard the term "critical", that refers to an eigenvalue), but I know someone who is an excellent civil engineer and knows next to nothing about them (or linear algebra in general) because they aren't that important for what he works on.Forgetting stuff you don't use is pretty normal, the important thing is to be able to recognize when a technique you don't remember the details of might be applicable, and to know where to look to refresh your memory.
 I'm the exact same way; my job is really heavy in linear algebra, so it sticks more easily for me.

Usually I go through the code and ask "what am I trying to do here?" and "can I do this a better way?" A lot of the aforementioned speedups have come because the previous developer was obviously trying to do something, like create a linear projector, but was following some sort of math formula and so made a bunch of extraneous matrices that were huge.

It's a simple fix, but it adds up when you're dealing with massive datasets.
 It comes up all the time when trying to build second order optimization methods. With the eigensystem of your objective in hand you have a complete understanding of the (non-) convexity of your energy landscape, which is useful to ensure you always have good search directions, etc.
 What kind of novel dataset are we talking about?Just from the top of my head, you could have encountered eigenvectors/eigenvalues:- if you ever used spectral graph algorithms- if you ever done dimensionality reduction via principal component analysis- if you ever calculated the steady state distribution of a Markov chain
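The Markov-chain case takes a few lines of NumPy; here's a sketch with made-up transition probabilities for a toy 3-state chain:

```python
import numpy as np

# The steady state of a Markov chain is an eigenvector of the transition
# matrix with eigenvalue 1. P[i, j] = probability of moving from state j to i;
# each column sums to 1.
P = np.array([[0.8, 0.3, 0.2],
              [0.1, 0.5, 0.3],
              [0.1, 0.2, 0.5]])

vals, vecs = np.linalg.eig(P)
k = np.argmin(np.abs(vals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                  # rescale into a probability distribution

# pi is invariant: one more step of the chain leaves it unchanged
assert np.allclose(P @ pi, pi)
print(pi)
```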
 Image analysis and signal corrections on time series data. Extracting features from microscope images for example, correcting for signal convolutions.
 It is certainly frequent in engineering. If you need to analyze the stability of an electrical grid, there isn't much alternative.A fun book on this is https://openlibrary.org/books/OL2398351M/The_algebraic_eigen...
 Just my anecdotal experience, I've seen it come up in interviews for an algorithms group at a medium sized biomed company.
 This is frightening but believable. I've worked with a few "quants" who stared at me doe-eyed as I explained eigen* and basic calculus concepts to them in the context of why their calculations don't add up. You mention you've used Fourier transforms before - if you don't understand an eigenbasis then you don't have a fundamental understanding of the math you're deploying.
 > You mention you've used Fourier transforms before - if you don't understand an eigenbasis then you don't have a fundamental understanding of the math you're deploying.

That's a bit uncharitable. A Fourier decomposition can absolutely be understood as an explicit bag of calculus tricks, with no loss of precision or generality. And an awful lot can be done with just those tools -- you don't need to explain JPEG compression or VLBI astronomy in terms of eigenvectors, for example.

Obviously (heh, "obviously") it's true that the decomposed functions form an orthogonal basis, so technically we're "really" operating in a linear space, and that has expressive power too. But there are lots of ways of looking at problems.

To wit, you're not wrong. You're just... Well, you know.
 Dude your post is orthogonal
 In the context I used it, FFT wasn’t really of fundamental importance. Using FFT (and in particular FFTW) gave a performance improvement (execution speed), but no real advantage in terms of accuracy over an alternative naive method...So... yes I guess I’ve just not seen anywhere in my work where this stuff has proved useful...
 It's pretty much ubiquitous in any quantitative field... would not even know where to start
 Interesting to see this back on the front page after three years. Still remember us sitting in our living room drawing this on paper and arguing about the right approaches.Maybe one day vicapow and I will make a triumphant return to the explorables space, but life has a way of getting in the way as you get older.
 Eigen{vectors,values} seemed like this totally arbitrary concept when I first learned about them. Later it turned out that they are actually really awesome and pop up all the time.

Multivariable function extrema? Just look at the eigenvalues of the Hessian. Jacobi method convergence? Eigenvalues of the update matrix. RNN gradient explosion? Of course, eigenvalues.
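The Hessian case is only a couple of lines, e.g. for the classic saddle f(x, y) = x^2 - y^2 at the origin:

```python
import numpy as np

# Hessian of f(x, y) = x^2 - y^2 is constant: diag(2, -2).
H = np.array([[2.0,  0.0],
              [0.0, -2.0]])

vals = np.linalg.eigvalsh(H)    # symmetric matrix -> use eigvalsh

if np.all(vals > 0):
    kind = "local minimum"
elif np.all(vals < 0):
    kind = "local maximum"
elif np.all(vals != 0):
    kind = "saddle point"
else:
    kind = "inconclusive (singular Hessian)"

print(kind)  # saddle point: one positive and one negative curvature direction
```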
 I highly recommend 3Blue1Brown's Essence of Linear Algebra series[0] for building a solid grasp of linear algebra.
 This is truly a favorite of mine.
 Classic paper on Google's PageRank: "The $25,000,000,000 eigenvector"
 Am I the only one with [Math Processing Error] all over this source? Ctrl+F gives me 57 instances of that string
 I see that on some pages. Try reloading.
 I didn't see anything like that. Possibly a mathjax.org hiccup?
 Quantum mechanics should be listed as another reason to learn about eigenvectors and eigenvalues! =)
 Beautiful demos/explanations. Would've been really handy back in school.
 Great interactive demos.
 There's nothing to see here.
 what do you mean? this seems like a strange statement, given the context.
 The visual explanation movement falls flat for me. It's like trying to understand Monads through blog posts. It's great if you already understand the concept to develop your intuition, or if you've never heard of the concept to pique your interest, but it won't help in the intermediate area where you know what you want to know but don't understand it fully. I need to build proofs through incremental exercises to grasp these concepts.
 As someone who understands eigenfunctions already, I don't understand the pictures either. Here is the best way to think about it: a matrix is a transformation, a composition of rotation, scaling, etc. Eigensets are lines through the origin that the matrix moves points along. So a rotation would have no eigenvectors because none of the points move in a straight line, while a scaling along the x axis would have an eigenset that was also along the x axis, consisting of the points that are moved straight out or in along that line.

To imagine finding the eigenset, just ask: could I draw a line through (0, 0) such that any point I put on it would stay on it after the matrix acted?
 > So a rotation would have no eigenvectors because none of the points move in a straight line, while a scaling along the x axis would have an eigenset that was also along the x axis, consisting of the points that are moved straight out or in along that line.

Maybe I'm being pedantic, but rotation matrices do have eigenvectors and eigenvalues; the eigenvalues are just complex, because unit-modulus complex numbers act as rotations in the complex plane.

It's exactly like saying x^2 = -1 has no solutions: it has two solutions, like any other quadratic, but neither of them is real.

In three dimensions, rotation matrices have three eigenvalues, one of them being 1, and the eigenvector corresponding to that eigenvalue is, naturally, the rotation axis.
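The 3D fact is quick to verify in NumPy, e.g. for a 90-degree rotation about the z axis:

```python
import numpy as np

# Rotation by 90 degrees about z: eigenvalues are 1, i, -i. The eigenvector
# for eigenvalue 1 is the rotation axis -- the only direction left fixed.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

vals, vecs = np.linalg.eig(R)
k = np.argmin(np.abs(vals - 1.0))      # locate the eigenvalue 1
axis = np.real(vecs[:, k])

assert np.isclose(np.real(vals[k]), 1.0)
assert np.allclose(np.abs(axis), [0.0, 0.0, 1.0])   # the z axis, up to sign
print(axis)
```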
 > So a rotation would have no eigenvectorsRotations have eigenvectors: a 2D rotation has two complex eigenvectors, a 3D rotation has one real and two complex eigenvectors, ...
 That's a fair and true catch, but I can cover myself by pointing out that the article was only talking about matrices in R^(m x n). ;)
 When your field of interest is the reals, those complex eigenvectors don't matter.
 I want to know it. I have to manipulate the mathematical objects computationally using proof techniques to know it. That just takes time. Thanks though.
 Does this gif help? https://commons.wikimedia.org/wiki/File:Eigenvectors.gif
 No gif will help. There is no visual explanation that will help was my point.
 Surely visual information helps with geometric problems? Our geometric intuition basically evolved to predict the relevance of light patterns on our retinas, and geometry is a language designed to encode these intuitions, so there's no reason to think visual tools presenting geometric facts would inherently fall short.
 And yet I still think that. What's up? I'm broken. Good to know.
