Unlearn rotation matrices as rotations (kodkodgames.gitlab.io)
258 points by dosshell on Aug 4, 2020 | hide | past | favorite | 124 comments

"Don’t think of them as rotations, think of them as unit vectors of a new coordinate system."

This makes it clear to me that many people have been very poorly served by their linear algebra courses if that is not obvious.

I'm one of those people. I really struggled with my undergrad linear algebra courses. It was taught in a very abstract way with no suggestion ever that any of this had useful applications. I passed (barely) and was glad to be free of it.

Years later when I got into graphics programming I found myself dealing with linear algebra concepts but this time it made much more sense to me. Being able to apply the ideas to real problems with visual solutions made all the difference.

I went through Stephen Boyd's "Introduction to Linear Dynamical Systems", and at some point he said something like "Eigenvectors are just the directions linear systems grow in", and bam!, many years of linear algebra came rushing back and everything made sense.

I had a very similar moment when I was thinking about why the determinants of all (non-identity) projections are 0 and the fact that the determinant is the product of the eigenvalues.
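Both facts are easy to check numerically. A quick numpy sketch (mine, purely illustrative), using the projection onto the xy-plane:

```python
import numpy as np

# Projection onto the xy-plane: it squashes the z-axis, so det should be 0.
P = np.diag([1.0, 1.0, 0.0])

eigenvalues = np.linalg.eigvals(P)  # 1, 1, 0 in some order

# The determinant equals the product of the eigenvalues: both are 0 here.
print(np.linalg.det(P))
print(np.prod(eigenvalues))
```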

> the determinant is the product of the eigenvalues.

Ohhhhh, now I get determinants. I thought they were just an easy way to classify a matrix, not that they had any special meaning, let alone that they're all the eigenvalues multiplied together.

That's one concrete way to view determinants, but it's not _the_ way to think about them.

Determinants are kind of abstract and even inelegant. This is because they're usually defined in terms of coordinates rather than abstractly. A coordinate-free definition exists, but it's very complicated.

Ideally they would not be taught at all, but it's useful for doing exam problems. Cramer's rule in particular is quite handy when solving simultaneous equations in engineering under exam conditions, for example. Even in computer/engineering applications, row reduction / diagonal normal form methods are used instead.

Determinants are a way to get a scalar number out of a matrix. You have linear forms, bilinear forms, and multilinear forms, the latter of which determinants fall under.

There are certain properties you would like the determinant to satisfy, such as det(AB) = det(A) det(B), so that the determinant can give you an idea of whether a matrix is invertible. Eventually you work out that a determinant _has to be_ a multilinear form with certain properties (alternating, skew symmetric).

Again this is all very complex and messy. Initially 'linear algebra' was just the study of determinants, because people thought of math in terms of solving equations. Linear algebra, as it's known today, came later.

Bilinear forms turn up all the time, so ideally you would first learn bilinear forms, then learn the properties a bilinear form can have (e.g. alternating, skew symmetry, etc), and then learn determinants in that context.

You went through Axler right?

It's a respectable idea that determinants should not be as important as they are on a pedagogical point of view imo. But they can be extremely useful.

From my point of view, matrices are a bad abstraction in general. For example, a linear map and a bilinear form can both be written as matrices of the same size, at which point we can no longer tell the two objects apart.

The trouble comes when we want to apply some transformation to those matrices, as they do not follow the same transformation laws: namely A^-1.L.A and A^T.B.A

This is a very common error in computer graphics when we want to apply some transformation to a 4x4 matrix and we do not know which kind of object it is.

I'm vaguely familiar with Axler, but I read the beginning of Halmos (Finite-Dimensional Vector Spaces).

Halmos's books are always good. I didn't know he had this viewpoint on determinants though.

What I meant is that from a purely mathematical point of view they are seen as inelegant because they require a basis to be defined. I don't have a problem with that when you are doing math on manifolds etc.

From a practical perspective, a lot of engineering problems live in 1 to 4 dimensions and are basis dependent. There the determinant becomes a useful tool.

He doesn't have a particular view on determinants, I got my views on determinants more from Axler than from Halmos. But Halmos' book is linear algebra as preparation for functional analysis, so he tends not to choose a basis for proofs.

I agree with you that in practical paper computations (e.g. on exams) determinants are an indispensable tool.

Determinants aren't inherently abstract; they're quite practical, especially in the article's context: they tell you the scaling factor of the transformation the matrix represents, and whether it does a reflection.

E.g. the determinants of the rotation matrices in the article are all 1, because they don't scale or reflect anything, just rotate.
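For example (a numpy sketch of my own, purely illustrative), any rotation about the z-axis has determinant 1:

```python
import numpy as np

theta = 0.7  # any angle works
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

# No scaling, no reflection: the determinant is exactly 1.
print(np.linalg.det(Rz))
```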

The matrix {{1,0,0},{0,-1,0},{0,0,-1}} has a determinant of 1 too, but it reflects along two different axes. So there's no way to distinguish from the determinant between the identity and between a reflection across two axes.

I think the important detail is if you're changing from right-handed bases to left-handed or vice versa during the transformation.

> So there's no way to distinguish from the determinant between the identity and between a reflection across two axes.

I fail to see the point you tried to make. Both transformations preserve volume. Why did you expect volume to change when you rotate something?

Parent comment claims you can use determinant to see if a reflection is occurring, I disagreed.

I don't know where you thought I expected volume to change if I rotate something.

Two reflections make a rotation. Your matrix is a rotation matrix that does no reflection (everything will be oriented the same way after being transformed).
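You can verify this numerically (numpy sketch, mine): the matrix above is exactly a 180° rotation about the x-axis:

```python
import numpy as np

# Reflect across the xz-plane, then across the xy-plane:
M = np.diag([1.0, -1.0, -1.0])

# Rotation by 180 degrees about the x-axis:
t = np.pi
Rx = np.array([[1.0, 0.0,        0.0       ],
               [0.0, np.cos(t), -np.sin(t)],
               [0.0, np.sin(t),  np.cos(t)]])

# The two reflections compose into one rotation; orientation is preserved.
print(np.allclose(M, Rx))
print(np.linalg.det(M))
```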

You're right, cheers.

> Determinants are kind of abstract and even inelegant.

No they really aren't. Determinants represent the change in volume in a transformation.

> Ideally [determinants] would not be taught at all, but it's useful for doing exam problems.

This is outright ignorant and wrong. Determinants are fundamental to a myriad of central concepts in calculus and also in engineering fields. For instance, volume integrals or any integral over a parametrization rely on determinants.

I did degrees in both math and engineering, please give me the benefit of the doubt that I'm not making a 1st year university level mistake.

In the specific examples you give, 2-forms could be introduced instead to yield the final formula for the Jacobian. I don't think this is overkill, because it's possible to use 2-forms without precisely proving the theorems about them. This is more convenient for math/physics people, and engineers care more about the final formula than about how exactly it was derived.

As another comment here pointed to, the idea that linear algebra is better off without determinants was popularized by Axler - I'm not making this opinion up out of nowhere.

As a practical example of what I would prefer be taught on the undergraduate level to e.g. engineering and physics majors:


math majors would probably supplement this with additional material

Well shit, now I'm confused again.

And the trace is the sum of the eigenvalues.

I've been looking for a course on this topic. Would you recommend it?

I was taught linear algebra largely through various physics courses I took through undergrad. Not necessarily the most traditional way, but I think that it should be taught that way more frequently - with physics, there is something concrete to grasp which can be helpful to a lot of people.

I am not sure about this. It does look more intuitive, and I kinda agree that it would be easier to grasp.

On the other hand though, wouldn't you lose the generality? Part of my undergraduate geometry and linear algebra course was on fields, groups and rings and that was built in order to show that once you can define a certain set of operations on a mathematical structure you can do "math" on such structures "as if" they were regular numbers (limitations apply).

Think of matrix fields (not sure if that's the correct translation; in Italian they were called "campi matriciali").

That was kinda mind-blowing to me, but at the same time it made perfect sense because of the generic definitions underlying it.

Not sure how you could go ahead and explain that with regular 3d-world physics.

In physics they make sure you know that you can't make a mirror out of rotations, too.

But interestingly, you can make rotations out of mirrors.

At my university they rolled differential equations and linear algebra into the same course. It ... wasn't very good.

I had that as well. IMO it was bad because of the lecturer, clearly a TA who really wanted to be doing something else. That was the semester I dropped out and started a company.

I technically passed a linear algebra course, but developed zero intuition for any of the facts I memorized. As a result, I no longer remember any of it other than something-something eigen-matrix.

I've never really needed to use linear algebra, but I did take a class on it recently, so...

Why does it matter that it's unit vectors in a new coordinate system? I don't really care about the new coordinate system, I care about rotating something to this position in my old coordinate system. Saying "oh yeah it's <1,1,1> in this coordinate system" doesn't really help because I still need to translate <1,1,1> in the old coordinate system to <1,1,1> in the new coordinate system... and now I've got the same problem again.

I mean, I get that a rotation matrix, or any matrix for that matter, represents unit vectors in a new coordinate system. If I got nothing else from that course I at least learned that a matrix represents like 50 different concepts like systems of equations, coordinate spaces, etc. A big part of my course was applying rules for one representation of a problem to another equivalent representation to a problem. What I never understood is why it matters how you're representing the problem. Identifying that a rotation matrix is just unit vectors in a new coordinate space doesn't help me solve my problem of wanting to rotate something, yet it felt like 75% of my course was about converting problems from one representation to another.

In this case, the idea is to make it easier to debug by looking at it. If you work in 3D graphics, looking at Euler angles (i.e., rotation around x, y, z) is usually pretty intuitive, but it has a lot of issues with gimbal lock, the order the rotations happen in, etc. Quaternions fix that, but they're very unintuitive to look at (well, unless there's another trick like this I'm unaware of -- I'm sure there is). On the other hand, this trick with the rotation matrix lets me represent an arbitrary rotation, but it's also pretty easy to take that representation back to something I can understand.
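To make the "easy to read back" point concrete, here's the kind of debugging helper I mean, sketched in numpy (the function name is mine, purely illustrative):

```python
import numpy as np

def describe_rotation(R):
    """Debug helper: print where each axis of the object ends up."""
    for name, col in zip("xyz", R.T):  # the columns of R are the new axes
        print(f"{name}-axis -> {np.round(col, 3)}")

# 90 degree rotation about z: x goes to y, y goes to -x, z stays put.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
describe_rotation(R)
```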

In 3d graphics (or physics or what have you) you're constantly moving between coordinate systems/frames of reference because certain things are very easy to represent in a local coordinate system but are difficult to work with in a global one

I think the idea is to recontextualize the coordinate system of your object as not being the global coordinate system in the first place. The matrix defines the orientation of your object relative to the global coordinate system.

The question of "how to rotate something" rests on operational semantics. The question of "how is this related to that" rests on denotational semantics, and is much more general. The beauty of linear algebra is how closely the two are related: every linear transformation (the relationship) gives a matrix given a choice of basis, and matrices have a direct algorithm for transforming coordinate vectors.

But that is actually very useful problem solving. Identifying a problem in your hand as identical to a different problem you know solutions for, allows you to solve it effortlessly like one of those impossible smith puzzles with knots and strings.

> "Don’t think of them as rotations, think of them as unit vectors of a new coordinate system."

That's exactly what all square non-singular matrices with orthonormal columns are. In fact, all square non-singular matrices are coordinate transformations. That's pretty much the central point of linear algebra.

Framing rotation matrices as changes of coordinate system doesn't add much information, much like framing polynomials as continuous functions: technically true, but not very clarifying.

Recently I worked on a project involving rotation matrices. Engineers were worried about matrix multiplications. I told them it's just a dot product with each of the 3 unit vectors... Sighs of relief all round.
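That observation is easy to demonstrate (numpy sketch, mine): the matrix-vector product is nothing more than three dot products, one per row:

```python
import numpy as np

R = np.array([[0.0, -1.0, 0.0],   # some rotation (90 degrees about z)
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
v = np.array([1.0, 2.0, 3.0])

# R @ v, spelled out as three dot products:
by_hand = np.array([np.dot(row, v) for row in R])
print(np.allclose(R @ v, by_hand))
```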

Even if you didn't learn this from linear algebra, doing any measure of graphics programming at the matrix level directly reveals these sorts of associations as you watch how the numbers accumulate and get everything debugged. Kids pick these concepts up too, hobbying around with programming rotating visuals. It doesn't require advanced math; it's just a very practical perspective.

SNES programming also directly introduces people to the notion of 2D vectors advancing along an original axis, which ends up as rotation in Mode 7 stuff.

I would emphasise one more thing in connection with this: scalar products are projections onto the new axes, and the matrix multiplication (the so-called rotation) of a vector is just calculating those scalar products. Now you can forget all those confusing angles altogether.

I remember this being one of the huge "aha!" moments in my linear algebra course. I'd been doing 3D graphics programming for a while at that point and suddenly all the mechanics behind the calculations snapped into place.

many people did not learn about them from a linear algebra course

They probably skipped classes to build their new app’s prototype. I wouldn’t even say it’s worse; most coders won’t need linear algebra.

Having a breadth of knowledge is one of those things you never realize the value of, if you don't have it. That's something I realized over the course of my career.

For instance, I have a broad math background, nothing too deep, but enough to have an intuition on how complex things worked. One of the defining projects of my career to date was an embedded signal processing project given to me because "it seems like you know math" -- and I figured it out on the job by intuiting my way through. I figured out Fourier analysis and windows and FIR filters by intuition, and was able to use the pieces I had available as API to do something truly impactful.

Most people won't need most things from school, but if you're the one who knows that one thing that's needed that one time, it can be disproportionately rewarding.

If you ever want to really do math at some point in your life, I think you need linear algebra or something equivalent. Better to build the intuitions earlier in your life.

> Rotation matrices just describe the unit vectors of a new coordinate system

More generally, _every_ matrix describes how a change of coordinate system should happen. For any m x n (m rows, n columns) matrix, each of the n column vectors represents how a basis vector of the current coordinate system (which may or may not be a unit vector) is represented in the new coordinate system (for a left-multiplied matrix).

Whenever a matrix's determinant is zero, it means that you squashed some dimensions. As you can imagine, when m is not equal to n, there will always be dimension squashing. Even when they are equal, that can happen. If you take a 3 x 3 matrix and it transforms all the 3-D vectors into only planes (which are 2-D objects), the determinant will be zero. This would be stated as having a rank of 2. More simply, you'd say the "volume" of the transform is 0 (because planes have zero volume).
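A concrete numpy sketch (mine, purely illustrative) of a 3 x 3 matrix that squashes all of 3-D space onto a plane:

```python
import numpy as np

# This matrix sends every 3-D vector into the plane z = 0:
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.det(A))          # zero determinant: a dimension got squashed
print(np.linalg.matrix_rank(A))  # rank 2: the image is a plane
```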

EDIT: Make sure to read important clarifications by JadeNB below.

> Whenever a matrix's determinant is zero, it means that you squashed some dimensions. As you can imagine, when m is not equal to n, there will always be dimension squashing.

Although these two sentences are correct individually, it may be worth emphasising that they should not be read together: the determinant is defined only for square matrices. (One can compute something like det(A^{adjoint}A) if you want a general numerical invariant that behaves as you like—and then we're getting into the neighbourhood of the SVD.)

> If you take a 3 x 3 matrix and it transforms all the 3-D vectors into only planes (which are 2-D objects), the determinant will be zero.

Although your meaning seems clear, I think you may have misspoken slightly. Of course 3D vectors are transformed by a 3 x 3 matrix into vectors, not planes. To me, what I think you mean would read better as "If you take a 3 x 3 matrix and there is a plane such that all 3D vectors are transformed into that plane, then …", or, perhaps even better, "… such that the transforms of all 3D vectors lie in that plane …".

Those are very important points. I've edited the parent to point out this clarifying comment.

Thanks! I should also have mentioned that, assuming we've got an m x n matrix acting by multiplying vectors fed to it on the right, dimension squashing is guaranteed only if n is less than m, not whenever they are unequal. (I guess one could argue that, in the case where n is greater than m, dimension squashing happens if you feed in vectors on the left.)

Yes, I did mean dimension squashing in either direction - left or right.

> The first column of the rotation matrix is the new x-axis expressed in the old coordinate system, the second column is the y-axis and so on. An identity matrix would yield no rotation since all unit vectors would be the same as in the previous coordinate system.

It's a shame that this isn't made more clear in most tutorials and classes. The idea of axis application order really does make things more confusing than is needed.

I remember that in my freshman's year, my linear algebra teacher explained that every matrix (i.e., not only rotation matrices) has this property: each column is the result of the operation on each basis element in the old coordinate system. This is true even for rectangular matrices.

Once you get this, you can start building matrices for any (finite dimensional) 'vector space' not just the standard 'vectors as a list of numbers'.

All you need to do is to pick a basis for your vector space, and you can start to represent linear transformations as matrices w.r.t. that basis. Even cooler is when you let go of specific basis choices, and start talking about properties of linear transformations that do not depend on a choice of basis. Things like a determinant, eigenvalues, etc.
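As a small illustration of this (my own example, not from the comment): take the derivative operator on polynomials of degree at most 2, with basis {1, x, x^2}, and build its matrix column by column from the images of the basis elements:

```python
import numpy as np

# Polynomials a + b*x + c*x^2 are coefficient vectors [a, b, c]
# in the basis {1, x, x^2}. Column j of the matrix is the
# derivative of basis polynomial j:
#   d/dx 1   = 0    -> [0, 0, 0]
#   d/dx x   = 1    -> [1, 0, 0]
#   d/dx x^2 = 2x   -> [0, 2, 0]
D = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])

p = np.array([5.0, 3.0, 4.0])  # 5 + 3x + 4x^2
print(D @ p)                   # coefficients of 3 + 8x
```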

I've spent the last year correcting for the poor linear algebra taught in my undergrad. The gaps and inconsistencies left by those classes are a lot harder to repair when you've got a full-time job and kids.

What do you suggest as a good text for learning it right the first time?

I'd actually suggest finding a book about a topic that makes use of linear algebra, since the intro chapters will explain the core concepts in a way that is genuinely useful.

For me, linear algebra didn't click until I read "Numerical Linear Algebra" by Trefethen & Bau. The first 4-5 sections have great explanations of matrix vector operations.

Other than that, the most popular linear algebra book that actually gets things right is "Linear Algebra Done Right" by Sheldon Axler.

Edit: ah, forgive me, didn't notice you said "first time". In that case the Axler book is a safe bet, just be diligent doing the exercises.

I found Gilbert Strang's MIT 18.06 lectures on youtube and associated course materials very useful.

I had an incredibly difficult time understanding his linear algebra textbook until I watched his lecture videos.

Yeah, his lectures are very well done. Starts with basics and builds up nicely.

If we relax the restriction that the columns of the 3x3 transformation matrix must be unit vectors, we also get the ability to apply scaling and shear transformations. In solid mechanics, this is called the F tensor, and is used to describe deformation of materials [1].

There is also an augmented 4x4 form if translation is needed as well:

  | Q Q Q  Δx |         | vx |
  | Q Q Q  Δy |         | vy |
  | Q Q Q  Δz |   dot   | vz |   = new vector
  | 0 0 0  1  |         | 1  |
where "Q" is the 3 × 3 transformation matrix (e.g., rotation matrix) that is the subject of the OP, and [vx, vy, vz, 1] is the augmented form of a vector "v" = [vx, vy, vz] that is being transformed.

The augmented form is especially useful for transforming voxel indices in a 3D image array to spatial coordinates, such as for MRI or CT image data [2]. It is helpful if the index coordinate system has its origin at zero (zero-indexed arrays), like a normal coordinate system.

Finally, if we have many [x, y, z, 1] data points that each have an old and a new position, we can compute the overall best-fit transformation from the old to new positions with the least-squares solution to

  A x = b
where "A" is the (unknown) augmented matrix representing the transformation, "x" is a 4 × n array storing the points' old augmented-form positions, and "b" is a 4 × n array storing the points' new augmented-form positions. If we allow the points to have arbitrary dimension, without any particular spatial interpretation, we get least squares curve fitting and the foundation of machine learning.
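A minimal numpy sketch of that least-squares fit (my own example; the names and the ground-truth transform are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# A known "ground truth" transform: rotate 90 degrees about z, then translate.
A_true = np.array([[0.0, -1.0, 0.0, 1.0],
                   [1.0,  0.0, 0.0, 2.0],
                   [0.0,  0.0, 1.0, 3.0],
                   [0.0,  0.0, 0.0, 1.0]])

# 4 x n arrays of augmented points, old and new:
old = np.vstack([rng.normal(size=(3, 10)), np.ones((1, 10))])
new = A_true @ old

# A @ old = new  <=>  old.T @ A.T = new.T, which lstsq solves for A.T:
A_fit = np.linalg.lstsq(old.T, new.T, rcond=None)[0].T
print(np.allclose(A_fit, A_true))
```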

[1] http://homepages.engineering.auckland.ac.nz/~pkel015/SolidMe...

[2] https://nifti.nimh.nih.gov/nifti-1/documentation/nifti1field...

What you describe is very standard in 3D graphics, and it's alluded to in the article by the question at the end. You can even use the last row of the matrix as scaling factors (you'll have to re-normalize the output vector).

From a mathematical standpoint, this is now a projective vector space.

Some references:




Note that projective spaces are not vector spaces. There is no meaningful way of adding vectors nor multiplying by a scalar.

Meaning we have to be very careful about treating our projective transformations as linear maps, as they are, in fact, nothing like that.

Of course people doing graphics often pick a section of the projective space ("last component is 1"), then map that onto R^3 and off they go, sort of ignoring the "planes at infinity" and praying for the best. This is messy but I do not see a better way.

Your wikipedia link reminds me that I should have listed rotation, scaling, shear, and reflection transformations. Oops.

In your second link, I like the trick of storing positions as [x, y, z, 1] and directions as [vx, vy, vz, 0].
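That trick is easy to demonstrate (numpy sketch, mine): translations move points (w = 1) but leave directions (w = 0) alone:

```python
import numpy as np

# Pure translation by (5, 0, 0) in augmented 4x4 form:
T = np.array([[1.0, 0.0, 0.0, 5.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

point     = np.array([1.0, 2.0, 3.0, 1.0])  # w = 1: positions translate
direction = np.array([1.0, 2.0, 3.0, 0.0])  # w = 0: directions don't

print(T @ point)      # the position moves
print(T @ direction)  # the direction is unchanged
```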

Regarding, "Hey, Markus! How come this matrix is 4x4?", I'm not sure if Markus was referring to an affine transformation, part of a perspective projection, a matrix representation of a quaternion, or something else. The mere fact a matrix is 4x4 (or some other size) unfortunately doesn't say much about what it's meant to do. Which probably contributes to Markus getting so many questions.

I was eight years into professional videogame dev when I decided to debug matrices used to generate racing track surfaces by drawing the columns in the game world, and I realised they were the X/Y/Z axes. I felt so stupid. I jokingly told the other coders around me how stupid I was... turns out we were all stupid.

One old timer laughed in the background and told us how he discovered this on the Amiga a few years before.

I learned this concept by watching 3blue1brown's series on Linear Algebra: https://www.3blue1brown.com/essence-of-linear-algebra-page

Would highly recommend. I feel like Grant's videos gave me a better understanding of Linear Algebra than the course I took in college.

The 3b1b videos are really high quality, but I don't find them particularly useful when learning new material. The fancy animations distract and mesmerize, I slip into a mode of being entertained, becoming mostly a waste of time.

For subjects I've already learned however they can be useful for gaining new visual perspectives.

On the topic of learning rotation matrices / linear algebra in video games, I'd strongly recommend the Handmade Hero youtube channel. Casey explains these subjects at length in multiple videos, using plain chalkboard-style drawings I personally find far less distracting.

They give the "feeling of learning", which is worth only a little. They also give the motivation to learn, which is invaluable.

Second on Handmade Hero. It's great.

I would heavily second this. I would also recommend, for those who are interested in computational linear algebra, going through fast.ai's course when you finish 3blue1brown: https://github.com/fastai/numerical-linear-algebra/blob/mast...

Could you expand a bit on why this is a good course? Lots of great linear algebra stuff out there, why this one?

Sure! The short version is that in this course, as in all of their other courses, I find they do a near-perfect job of contextualizing information as they teach it.

To expand a bit more, I usually don't enjoy resources that emphasize how "practical" they are, because I almost never "learn" anything from them. They teach procedures, not concepts.

This course, and fast.ai's other courses, are different in that they still approach the subject matter in ways that feel tangible and "real world," but they are doing so in a way that reveals and helps you learn the underlying concepts—it's just done in a top-down manner.

YMMV of course, this has just been my experience.

3Blue1Brown is really a great resource. His visualization library looks really nice too. He clearly spends a lot of time thinking about how to explain things. A lot of colleges could learn a lot about how to teach from people like him.

This series taught me all sorts of things I never grokked from years of university education.

Also recommended: https://natureofcode.com/book/

Funny. Only some months ago I needed to figure out how to represent the unit vectors of a (rotated) coordinate system in another coordinate system. After some days of drawing on paper I came to the conclusion that the rotation matrix columns are exactly that. But I found no way to verify that claim on the internet. Thank you for this.

That's not only true for rotation matrices. The columns of any matrix are where the unit vectors in the original coordinate system are mapped to when multiplied by the matrix.

A really neat thing I learned in physics was Bra-Ket (get it bracket?) notation.

Kets are vectors, written like |u>. Bras are objects that compute dot products with vectors; they are written like <u|. So <u|v> = u · v.

However, |u> <v| by itself is a new object: applied to a vector, it first takes the dot product with v and then scales |u> by the result.

Letting e1, e2, ... denote the coordinate basis vectors, we can write a matrix A with coefficients a_ij as the sum Σ_ij a_ij |ei> <ej|.

This can be rearranged as

Σ_j |Σ_i a_ij ei> <ej|

Written like this, it is clear that column j is exactly where the unit vector ej ends up.
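A numpy sketch of the same idea (mine, illustrative), using the convention A = Σ_ij a_ij |ei><ej| and np.outer for the |ei><ej| pieces:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
e = np.eye(2)  # e[:, j] is the basis ket |ej>

# Rebuild A from outer products: A = sum_ij a_ij |ei><ej|
rebuilt = sum(A[i, j] * np.outer(e[:, i], e[:, j])
              for i in range(2) for j in range(2))
print(np.allclose(rebuilt, A))

# ...and A|ej> picks out column j of A:
print(A @ e[:, 0])  # the first column
```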

What are colleges teaching these days if not this??

For me I think the difficulty was just reconciling reality with the concept of a coordinate system. I think it was just some abstract concept that I considered to be out of my cognitive reach of understanding. I just had no way to anchor anything being said to me to a concept I had in my head at the time.

I remember this poor teaching fellow at Lehigh University, where I studied Mechanical Engineering. He was foreign and had the honor of teaching us Linear Algebra. He must have noticed that none of us had even the slightest damn clue what he was talking about. So, the class before every exam, he would literally just go over the exam problems nearly verbatim and solve them. I came out of that class with a good grade and not the slightest fucking understanding of Linear Algebra beyond the very superficial algebraic laws of multiplying tensor objects (i.e., scalars (0-D tensors), vectors (1-D tensors), and matrices).

I'd like to thank Grant Sanderson (3Blue1Brown) and Mike Cohen (Udemy Linear Algebra course using Matlab and Python) for teaching me linear algebra. And don't forget Gilbert Strang, of course!! (find his course on MIT OCW) The visualizations of the tensors transforming provided by the first two are what really made Linear Algebra start to click for me. I want to go back to Gilbert Strang's course now that I have a better geometric understanding of what is happening and appreciate all his wisdom on the subject.

As a physics major at the time, I think that learning physics in addition to multivar and linear algebra is so handy in terms of providing reference material for intuitions.

Also, learning it as part of a numerical computing stack (like python) where you can experiment and visualize is huge for building intuition.

I don't know, my college days were a couple of decades ago, so I have already forgotten plenty. But if you can point to a good source in the internet that explains this, I could try to see why my research efforts failed.

Here's wikipedia. Granted, the mathematicians behind wikipedia seem to be focused on making everything as notationally obscure and hard to understand as possible, but it outlines the concept https://en.wikipedia.org/wiki/Linear_map#Matrices

Well, I have to say that if that is the most approachable and easily googled content in the internet to figure this out, I am not too surprised I had to figure it out on my own...

How to compute a matrix product, how to compute the solution to a system of equations, how to compute a determinant (inefficiently), how to compute eigenvalues

Lots of how, not enough why

I'm glad you appreciated the text.

I remember asking in the freenode IRC channel ##math why matrix multiplication is defined like that, and not in some other way. I was told:

>> cuz the composition of linear maps are defined like that.

It took a long time to understand what they meant. I still see it taught as something to work out mechanically. Brave friends of mine went to the Wikipedia page only to come back with more confusion. Wikipedia is a great reference when you know the subject well enough.


> Wikipedia is a great reference when you know the subject well enough.

Some math pages on Wikipedia are great. Some are... horrid. A while back (2017?) I couldn't remember the terms in the Taylor expansion of sqrt(x), so I went to Wikipedia. The article was one of the most jumbled mishmashes I'd ever tried to read. But, even though I'm qualified to fix it, I didn't dare wade into Wikipedia politics. The rewrite would have taken a while (it's a long article), but I can't even imagine how long shoving it through would have taken!

I want to share my favorite linear algebra textbook (Linear Algebra Done Right by Axler)


It's meant for a second course in linear algebra and the focus is on abstract vector spaces and linear transformations (rather than a table of numbers perspective). It also doesn't use determinants until the last chapter.

LADR is probably the best linear algebra book around. It's a bit of a shame that the "marketing" mostly involves the fact that it does not use determinants.

The last edition also contains chapters on stuff like dual spaces and quotient spaces, which are important yet very often missing in other textbooks.

That being said, his reliance on C/R in about half the book could be a problem for people with a CS background. Like, what if I want to compute determinants of linear maps over Z_2? There are also no chapters on number wrangling (like Gaussian elimination and friends), which is a big plus in my view but possibly a problem for others.

I think these are great points. I'd justify the omission of fundamental techniques like Gaussian elimination by the fact that the book is meant for a second course in linear algebra. My copy has dual and quotient spaces, and I think these are important topics to get an introduction to if you're looking to learn functional analysis, topology, etc. Some discussion of finite fields could be interesting. This has prompted me to look up linear algebra over finite fields.

It’s clearer in 2D. Rotating by angle `a` tilts the x-axis to (cos(a), sin(a)), because that’s the very definition of cosine and sine (i.e. the x and y components of angles on the unit circle). The y-axis just flips the components and negates one, because that’s what 90° rotations do.
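The same fact is easy to check numerically. A quick numpy sketch (numpy is my addition, not from the comment):

```python
import numpy as np

# The columns of a 2D rotation matrix are exactly where
# the old basis vectors land.
a = np.deg2rad(30)  # any example angle
R = np.array([[np.cos(a), -np.sin(a)],
              [np.sin(a),  np.cos(a)]])

# First column = image of the x-axis (1, 0) = (cos a, sin a).
assert np.allclose(R @ np.array([1.0, 0.0]), R[:, 0])
# Second column = image of the y-axis (0, 1): components flipped, one negated.
assert np.allclose(R @ np.array([0.0, 1.0]), np.array([-np.sin(a), np.cos(a)]))
```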

This is so important! I spend my life writing EKFs for navigation systems however did not have a good linear algebra education. It was a hard struggle! Now, every time I have a new hire or intern, the first thing that I go over is basic linear algebra and rotations including DCM, Euler angles, quaternions, and so on. I also make sure they understand it graphically such as how you can scale and project vectors using a matrix.

Hi, do you mind if I reach out to you? I am working on similar projects, and somehow every filter I've tried has some issues.

Sure, just send me an email.

I thought I knew what rotation matrices were until I read this article. Now I am confused. I used to think I needed a 3x3 matrix to rotate a point (with x, y, z coordinates) in 3D space. But the article refers to Rx, Ry and Rz, EACH of which is a 3x3 matrix. Why are there 3 matrices? To move an x coordinate to a new coordinate system, I only need to displace it along the existing three axes. So each coordinate only needs a 3x1 vector to displace it. That makes a single 3x3 matrix.

Any number of successive rotations can be represented as a single matrix. Both are right at the same time: an arbitrary rotation (represented by a single matrix) is a combination of 3 rotations about the axes of the reference frame (represented by 3 matrices).

R_x, R_y and R_z are a basis for generating all rotations. Think of them as the equivalent of the basis vectors in 3D Euclidean space: e_x = (1, 0, 0), e_y = (0, 1, 0) and e_z = (0, 0, 1).

> I only need to displace it among the the existing three axes.

That is exactly what these three matrices do. R_x depends on a single parameter and tells you how much you rotate around the x-axis. R_y tells you how much you rotate around the y-axis and R_z for the z-axis. Any rotation can be written as a product of these three (which then is a single 3x3 matrix depending on three parameters).
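As a sketch of how the three one-parameter matrices compose into a single rotation (definitions follow the standard right-handed conventions; the Z-Y-X composition order below is just one common choice, and the order matters):

```python
import numpy as np

def Rx(t):
    """Rotation by angle t about the x-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    """Rotation by angle t about the y-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    """Rotation by angle t about the z-axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Compose into a single 3x3 matrix depending on three angles.
R = Rz(0.1) @ Ry(0.2) @ Rx(0.3)

# The product is still a rotation: orthogonal with determinant +1.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```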

Thank you to all who replied. My clue should have been the use of the same theta instead of alpha, beta and gamma.

Those three matrices Rx, Ry, Rz are just three separate examples from Wikipedia of how to construct a rotation matrix whose rotation axis is the x, y, or z direction. They are separate rotations that don’t have to be combined.

Those three matrices describe three different rotations. Rx is a rotation about the x axis, Ry about the y axis, Rz about the z axis. They're parameterized by an angle theta that describes how much to rotate.

Rx rotates about the x-axis (in the yz plane), Ry about the y-axis (in the xz plane), and Rz about the z-axis (in the xy plane).

We need all three because none of them can be made from the other two, but any other rotation can be made from a combination of these three.

I'd say quaternions are a nicer way to represent rotations, but a 4x4 matrix can describe any projective transformation in 3D.

Two remarks:

1. > Right handed, z forward through the nose and x through the left ear.

Interesting, in aviation engineering the convention is different: the body frame has x forward (positive rotation around x is right roll), y to the right (... pitch up), z down (... right yaw).

2. The best reference (reference, not explanation/textbook) for this whole spiel (rotation matrices, Euler angles, quaternions) I've seen is a paper by Diebel, Representing attitude: Euler angles, unit quaternions, and rotation vectors


(Or maybe I like it because "Diebels Alt" is my favourite local beer... https://en.wikipedia.org/wiki/Altbier)

Speaking of unlearning rotations, has anyone looked at https://bivector.net ? Some other hacker posted it a while back on another thread and I found it to be really intriguing.

This is a great clarification. It's a shame that Wikipedia is such a bad reference for learning maths. I think it is a result of most maths articles being edited by people who just learnt something and therefore don't understand it enough to explain it well, and also want to show off their knowledge rather than actually explain things.

Mathworld is much much better in general.

(Somebody's probably going to reply "Wikipedia is an encyclopaedia not a tutorial".)

The matrix article is really unusually bad.

One of the problems, I think, is that there are competing "views" of linear algebra. Mathematics students are taught a different class than everyone else where vector spaces and linear functions are brought to the fore. Everyone else seems to get a more computation-focused course. So there's a tension there where if you write a math-style article about matrices where this fact about the columns would probably be the definition of a matrix, everyone else might not even recognize what's going on.

There's also the issue that the current article is so sprawling that refactoring it to subject it to a unifying plan would require a large rewrite, and that's hard in an environment like Wikipedia.

Wikipedia is great when you already know a math topic and just need to look something up, but is terrible at teaching math. Generally the level of abstraction is too high and too general for a first introduction to a topic.

I find that sometimes even when I'm familiar with a topic, it ends up being a bad reference. This is especially true when they start introducing variables and constants that haven't even been defined.

For what it's worth, I find that Wikipedia is a great place to learn about more advanced topics in abstract algebra / analysis / category theory. For example, the representation theory [1] page gives some pretty good context for how it relates to other fields.

It's not the same as a textbook, but generally, after reading the Wikipedia page, I know enough new terminology to intelligently Google for more complete resources like lecture notes, papers, etc..

[1] https://en.wikipedia.org/wiki/Representation_theory

This is absolutely correct.

Wikipedia math pages (and some science pages too) are correct in the details and yet provide no insight.

It's almost the opposite of what you want from an encyclopedia, which should give an overview with references to more details.

Well, the insight each person needs varies.

I have also encountered this problem. I learned the nature of rotation from first principles in order to solve these problems without documentation... which has turned out to be a massive boon.

Tutorials are the devil. A lot of the time they do not teach so much as demonstrate.

It's a sad place to be.

There's nothing wrong with understanding rotation matrices as rotations, it's just that in their application it's not the right mental model. Ideally you'll understand both representations and switch between them as needed.

The way I think about it: any matrix multiplication consists of a linear combination of the column vectors of your transformation matrix, with each column vector 'weighted' by the input vector's x, y, z, ... n values.
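A hedged numpy sketch of that view (the numbers are arbitrary examples of mine):

```python
import numpy as np

A = np.array([[1.0, 4.0, 7.0],
              [2.0, 5.0, 8.0],
              [3.0, 6.0, 9.0]])
v = np.array([0.5, -1.0, 2.0])

# A @ v is the linear combination of A's columns,
# weighted by the entries of v.
combo = v[0] * A[:, 0] + v[1] * A[:, 1] + v[2] * A[:, 2]
assert np.allclose(A @ v, combo)
```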

I think the biggest struggle with rotation matrices isn't the theory but the nomenclature. Is it R(ZYX)?! Is it a rotation axis rather than Euler angles?! All the convention differences can lead to significant mistakes very quickly.

For some time I was surprised that anyone would represent rotations by anything but rotation matrices. I sometimes do find Wikipedia useful for quick maths answers, however I agree it's quite disorganised.

Is there a good web app where you can play around in a visual 3D space with transforms like this and other ones like quaternions?

I’ve encountered this misunderstanding many times, usually in the context of creating a rotation matrix for a given direction and “up” vector. People look baffled when I replace their elaborate Euler-rotation algorithm with three lines of code:

    z = direction
    x = up.cross(z).normalized()
    y = z.cross(x)
    M = [x, y, z]

Could you explain this in more detail?

A matrix-vector multiplication (or any other linear operation) can be understood as a change of basis, with the matrix rows or columns as the new basis vectors. A rotation[1] changes to a new set of basis vectors that are (a) all unit length and (b) all perpendicular to each other.

In the code, direction is assumed to be a unit vector, and that's your new z. x = up.cross(direction) is perpendicular to both up and direction, but may have arbitrary length since up and direction aren't generally perpendicular to each other; so x must also be normalized to length one. y = z.cross(x) is perpendicular to both z and x, and doesn't need normalizing since they were already perpendicular (though extra normalizing doesn't hurt, and sometimes, probably not here for most purposes, helps to clean up rounding error).
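Here's the three-line recipe as a runnable numpy sketch (column-vector convention assumed; the function name is mine):

```python
import numpy as np

def look_rotation(direction, up):
    """Build a rotation matrix whose z column points along `direction`.
    A sketch of the three-line recipe above, columns = new basis vectors."""
    z = direction / np.linalg.norm(direction)
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)   # up and z need not be perpendicular
    y = np.cross(z, x)          # already unit length
    return np.column_stack([x, y, z])

M = look_rotation(np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0]))
assert np.allclose(M.T @ M, np.eye(3))    # orthonormal columns
assert np.isclose(np.linalg.det(M), 1.0)  # proper rotation, no mirroring
```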

It's almost never a good idea to use Euler angles unless you're modeling a system where they have physical meaning (e.g., robot arm, gimbal, etc.). The math is almost always simpler when you work in terms of the rotation matrix directly (or quaternions, or axis-angle rotation, or pretty much anything else).

1. Or mirroring, which is the same as a rotation but with any two of the basis vectors swapped or any one negated.

It is usually used to create a matrix for the camera orientation given a) the camera position in XYZ coordinates, b) a point the camera is looking at in XYZ coordinates, and c) the direction of 'up' in the world (otherwise the camera would be able to spin freely along its Z axis while still satisfying the 'look at point A from point B' requirement). You build up the 3 axes of the camera orientation step-by-step. The 3 axes can then be plugged directly into the transformation matrix, and voila.

When I was learning 3D graphics, I found this site an excellent resource. Here is the page with a more in-depth explanation of the 'look at' function:


Creates a matrix with one axis in a given direction. Then a second axis at a right angle to that, based on a reference up axis. The third axis is simply perpendicular to both. Position goes into the last vector of the matrix.

How come y is z.cross(x) and not up?

Edit: I think it has been answered by others - up is in world-space, and the camera is not necessarily aligned horizontally.

Yep, the up vector is not necessarily orthogonal to the direction vector (which is also called “look at” vector in OpenGL [1]). Another approach would be to set

    y = up.reject(direction).normalized()

    a.reject(b) = a - b.project(a)
[1] https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml...
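For what it's worth, the reject identity is easy to sketch in numpy (project/reject below are hypothetical helper names mirroring the comment's notation; they are not numpy API):

```python
import numpy as np

def project(a, b):
    """Component of a along b."""
    return (np.dot(a, b) / np.dot(b, b)) * b

def reject(a, b):
    """Component of a perpendicular to b: a - b.project(a)."""
    return a - project(a, b)

direction = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # unit look direction
up = np.array([0.0, 0.3, 1.0])                        # not perpendicular to it
y = reject(up, direction)
y = y / np.linalg.norm(y)

# y is now a unit vector orthogonal to the look direction.
assert np.isclose(np.dot(y, direction), 0.0)
assert np.isclose(np.linalg.norm(y), 1.0)
```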

We've already defined z and x by that stage, and we need y to be a unit vector perpendicular to both. So y can only be z.cross(x) (or x.cross(z) for mirrored).
