Let's remove Quaternions from every 3D Engine (marctenbosch.com)
452 points by Glench 44 days ago | 175 comments



Tangential: Back in 2000 there was some debate about dropping quaternions in favor of a set of equivalent operations on plain matrices. https://www.gamedev.net/articles/programming/math-and-physic... The debate petered out when the proponents finished optimizing their implementation and found they had produced exactly the same code as the existing quaternion implementations. The only difference was the approach used to derive it.


Thank you for posting this, I found it both very enlightening and a pretty fun internet-drama read :D

Interesting that the original complaint of that article/thread and this one are both the same - ie. quaternions are 'too hard to understand', and people 'use them without knowing how they work'. With this being posted so soon after https://news.ycombinator.com/item?id=18310788 , I feel like that complaint is less valid now than it ever has been, but it's kind of interesting to think about why quaternions in particular draw so much fire when, honestly, I found matrices to be equally challenging at first. Is it just an education thing, since matrices are generally taught much earlier? Or is there some intuition in matrices that isn't present in quaternions? Both of them seem to be just 'do this math, then magic happens' unless you think about them a fair bit, and that was pretty challenging for both the first time round imo.


Matrices are linear transformations. Take a bunch of vectors, map them to another bunch of vectors. It's all inherently geometric.

The matrix:

  adg
  beh
  cfj
maps the vector (1,0,0) to the vector (a,b,c), (0,1,0) to (d,e,f), etc... so if you write any other vector as a weighted sum of (1,0,0), (0,1,0), etc... the weights get mapped to the new vectors.
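
In code, the columns of the matrix are exactly the images of the basis vectors; here is a minimal numpy sketch of the 3x3 example above (the concrete numbers are mine):

```python
import numpy as np

# The matrix from above: its columns are the images of the
# basis vectors (1,0,0), (0,1,0), (0,0,1).
a, b, c, d, e, f, g, h, j = range(1, 10)  # arbitrary concrete numbers
M = np.array([[a, d, g],
              [b, e, h],
              [c, f, j]])

print(M @ np.array([1, 0, 0]))  # first column: (a, b, c)
print(M @ np.array([0, 1, 0]))  # second column: (d, e, f)

# Any vector is a weighted sum of basis vectors, so its image is the
# same weighted sum of the columns.
w = np.array([2.0, -1.0, 0.5])
assert np.allclose(M @ w, 2.0 * M[:, 0] - 1.0 * M[:, 1] + 0.5 * M[:, 2])
```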

As someone with a background in physics and geometry, it seems bizarre to me to use quaternions to reason about geometry. It's really unnatural. It's basically just a coincidence that the algebraic structure of rotations in 3d happens to look like this. It doesn't generalize to higher dimensions. There is no equivalent to quaternions for 4, 5, 6d [1].

Bivectors discussed in the article, on the other hand, easily generalize to higher dimensions, and are a completely natural geometric concept. (Of course if your thesis is about constructing 4d objects you care about that a whole lot more than if you build a 3d engine...)

Now that doesn't mean it's easier or harder to manipulate one or the other. As they all reference the same mathematical structure it's also not surprising that the code ends up being the same. And given that algebra is more natural for computers, I can see why quaternions might be useful here. But geometric they are not.

[1] https://en.wikipedia.org/wiki/Composition_algebra


There is an N-dimensional generalisation for complex numbers, quaternions and hypercomplex numbers called Clifford algebra. I'm not an expert in that field (coming from physics and optics) but apparently it's not by chance that quaternions are connected with the rotation group SO(3) and complex numbers with SO(2). You can generalize to SO(n) with Clifford algebra.


This is closely related to the exterior algebra, which is where bivectors live. But I don't know much about Clifford algebras either. I didn't know that they are more directly connected to Quaternions. Thanks!


The 'geometric product' mentioned in the article is just Clifford's product. The 'geometric algebra' is the same thing as the Clifford algebra induced by the scalar product.


Ah got it. Thanks. Feels like I should have known that.


There was an excellent article posted to HN a few years back about Clifford algebra you (and GP) might find interesting:

https://slehar.wordpress.com/2014/03/18/clifford-algebra-a-v...


I think matrices are easier for people to visualize because they can think about them as a vector frame and coordinate transformations.

Quaternions are seen as this abstract 4D space somehow containing 3D space transformations, and people have a much harder time visualizing 4D space. I do not think spending a lot of time trying to visualize this 4D space, like this video does, is the right way to go. Rotors make that method obsolete.


To be honest the hardest part is not understanding single rotations, but understanding how rotations compose.

Thinking about a scalar + bivector as the quotient of two vectors or as the composition of two reflections is easy enough. But that still doesn’t yield a solid understanding of how rotation works – 3D rotation is subtle and tricky, even for people who have pretty good spatial reasoning and a lot of experience working with solid geometry / 3D design.

I have been thinking about the best interactive diagram(s) for giving people a good intuition about that for a long time, and I have some ideas but I still haven’t implemented anything and I’m not sure how well it will work.


In high level programming, the code is for humans, the optimized compiler output is for the computer.


You're greatly overestimating how much the compiler can optimize code.


These days you need to be a pretty good assembly programmer to beat the optimizer. Maybe in the future, with the help of "AI", the computer can do what we want instead of what we tell it.


If you call a .sort() that implements bogo-sort, any programmer will be able to write a better sorting algorithm in assembly. I doubt any "AI" will be able to restructure your high-level bogo-sort into an efficient algorithm.


I'm not an especially good assembly programmer, but I beat the optimizer every day, in some cases by an order of magnitude. Even the best compilers suck at automatic vectorization.


But the article is about 3D graphics programming, which is often performance-sensitive.


I do love Geometric Algebra and hope it gets further adoption. In my field (physics), the project of moving to geometric algebra currently looks hopeless due to the inertia of other formalisms and lack of interest from established physicists.

The real shame is that the best opportunity to introduce physicists to geometric algebra was a hundred years ago when we were first discovering spinors. First the Pauli algebra and then the Dirac algebra. If physicists at the time had been made aware of the geometric algebra approach to these things, I think it could have been an appropriate motivation to switch to geometric algebra. Instead, people came up with matrix representations of the Pauli and Dirac algebras and the field is stuck using those and saying "meh. Good enough."


In my personal experience physicists are usually rather pragmatic about these things. Not many will support a "project of moving to geometric algebra" for its own sake, or the sake of perceived mathematical elegance - I know I wouldn't, even though I spent some time studying it just for fun. On the other hand, find a problem where geometric algebra gets you to a useful insight in a quicker or more natural way and few will object to treating it in that language.

For most of the usecases associated with the examples you mention - Pauli and Dirac algebras - I don't think it matters very much. For instance, I haven't seen a decent QM course that doesn't devote at least a lecture or two to discussing the origin of Pauli matrices and the associated abstract algebra and group theory - as they should - but I don't think the course would be any more concise or useful (in terms of the physical content) if it then proceeded to treat QM in the language of geometric algebra. There's nothing wrong with the matrix representations (especially in a more computational context), though obviously one should know where the representation comes from and how to derive it.


> On the other hand, find a problem where geometric algebra gets you to a useful insight in a quicker or more natural way and few will object to treating it in that language.

I would argue that E&M and anything involving rotations (or Lorentz boosts!) are two great examples where GA is miles ahead of anything other than differential forms, and still has modest advantages over differential forms as well.

The main advantage I see in geometric algebra is that it's a unifying framework. The same language used to understand E&M in GA is also useful in quantum mechanics, general relativity, complex analysis, linear algebra, etc. I'd argue that physicists don't mind this mish-mash of notations because they've already put in the work to learn them all separately, but I think there are real advantages (especially pedagogical) to unifying them into one framework.


I think that last point really needs to start with a thorough reform of basic mathematical education (starting from undergrad calculus) to have any chance of real success. The present mish-mash of notations, however disjointed, lends itself very well to teaching people who don't know much beyond basic real analysis and differential equations. You can't just throw them in a GA-based E&M class without completely changing the entire curriculum first.

Your first point got me curious, though. I found E&M in differential forms quite satisfying from an intellectual point of view (although of little practical benefit to the problems I was interested in compared to the classical approach, which I was plenty familiar with already). What advantages do you see for GA vs. differential forms in this context? This might make for a few interesting seminars with our math folks...


I should be clear here that my longing for a physics community that embraces GA should not be confused with any level of optimism that it'll actually happen. I agree its a monstrously difficult task and our current formalisms are more or less good enough. A man can dream though!

One advantage I see in the GA treatment of E&M is in radiation. If one takes the Faraday bivector for a stationary charge (i.e. the Coulomb field) and then just 'rotates' it into a moving frame using the GA rotors, you can find the standard radiation fields of E&M quite easily. This is cool because it's not only a neat procedure mathematically, but it's computationally quite efficient, say if you need to find the fields produced by N moving charged particles.

I'm sure there are more examples of things like this but it's been a while since I last looked at E&M in GA so none are coming to mind right now.


How do you "just 'rotate'" a bivector into an accelerated frame? Because if you're talking about Lorentz transformations between inertial frames, those alone can't give you radiation fields.

To get radiation fields, you need Maxwell's equations and boundary conditions. When you have those, what is the use of GA?


Yes, it seems I have oversimplified things. My apologies, it's been a little while. Instead of the prescription I described, here is what one must do:

1) Start with the Coulomb potential A(r) = qγ₀/(4πr)

2) Make it covariant by replacing r -> X⋅γ₀ where X = r(γ₀ + γᵣ). This is a trivial transformation in the static frame, as one can easily show X⋅γ₀ == r, but it's important because it encodes the causal information you astutely pointed out is vital to obtaining radiation solutions.

This gives A(r) = qγ₀/(4π X⋅γ₀)

3) Now, one can 'rotate' this to a frame with (not necessarily constant) velocity v via A'(r) = R A(R̃ x R) R̃, i.e. evaluate the rotated A at the inverse-rotated spacetime position. Then

A'(r) = q v/(4π X⋅v)

This is the Liénard–Wiechert potential (this can also be obtained by solving ∇²A = J using the retarded Green's function G(r, t) = δ(|r|-t)/(4π|r|)).

4) Now one can get the radiation fields by taking the derivative of this and doing some algebra:

F = ∇A = (q/4π) (X ∧ v + ½ X(v̇ ∧ v)X)/(X⋅v)³

The term proportional to X ∧ v is just the Coulomb field, but the term with the X(v̇ ∧ v)X is the radiation field which one can show propagates to infinity.

This is unfortunately much less elegant and straightforward than I let on initially so I apologize.

I still think this has advantages over standard treatments, but your mileage may vary.


That's a neat example, thanks!


> You can't just throw them in a GA-based E&M class without completely changing the entire curriculum first.

Preparing for the additional requirements of the coming semester's lectures is what the semester break is for.


Nah, you really want to start students learning the GA formalism in their linear algebra and vector calculus classes, if not in high school.

Especially important in high school is to dive into not only the “complex numbers” but also the “split complex numbers”.


Sure, as long as you don't expect anyone to complete the course or are willing to play catch up for most of the semester. In other words, I'd love to plan courses that way, but no university administrator would ever let me (instead I restrict myself to graduate seminars and beyond).


How does Geometric Algebra link into the 4-vector representation of E&M?

https://mitpress.mit.edu/books/collective-electrodynamics


Can I ask you (and keldaris) what precisely you mean by geometric algebra, and how you would propose to teach it?

I'm a math professor. To me "geometric algebra" refers to any use of geometry in mathematics to illuminate algebraic principles. There are very very many of these -- this article illustrates one good example. I try to incorporate intuition from geometry into my teaching whenever possible.

It seems that the two of you are referring to some particular unifying framework, and/or set of definitions and notation. Could I ask you to elaborate on what you mean?

Thanks.



Thanks, I'll have a look!

This looks a bit too high-level to inform my undergraduate teaching, but certainly I could learn something from this.


I'm not any of the two you asked, but given that @eigenspace is a physicist, what you are looking for (notations used, etc) is possibly here:

http://geocalc.clas.asu.edu/html/Evolution.html#References

under "Recent Books". Notably "Clifford Algebra to Geometric Calculus" and "New Foundations for Classical Mechanics" by Hestenes.

PD: if you ever attempt to read any of those two, I'd advise to go to the latest (corrected) reprint.


https://en.wikipedia.org/wiki/Geometric_algebra

Less ambiguously called "Clifford algebras", though Clifford himself preferred the term geometric algebra.


Not “meh, good enough” but rather “eh, computationally equivalent, why bother reformulating something that looks like a black box but is actually a gimmick we can rely upon to give equivalent results and that is understood by everybody in the field?”


Two ways of approaching a problem can be computationally equivalent and yet one can be a clear winner. For example, take a look at Maxwell’s equations—as originally formulated there were 20 different equations, with many equations duplicated for x, y, and z. Heaviside, Gibbs, and Hertz reformulated these using vectors and reduced the number of equations to four. Computationally equivalent, but nobody uses the 20-equation version of Maxwell’s equations any more.

Now consider relativity, which was originally motivated by Maxwell’s equations. We can describe flat spacetime as a 4D space with the Minkowski metric, which we can extend into a geometric algebra Cl_1,3(R). We can then describe field and current as multivectors, and we end up with the equation (singular!)

∇ F = µ_0 c J

This has the additional property that it captures how observations of electromagnetism change under Lorentz transforms. If you think of a Lorentz transform as just a change of basis in spacetime algebra, and if you think of electricity and magnetism as together being the basis for electromagnetism, then it’s obvious that (for example) a moving observer would see a magnetic field generated by a stationary charge.

This is obvious because the basis change from a stationary to moving observer will directly correspond to a basis change from an electric field to a magnetic field. This is simplifying a bit but I find it easier to remember and reason about the geometric algebra version of many of these formulas.


Do you think nature requires such an exotic representation?


I would argue that there is nothing exotic about geometric algebra, and that it's a much more natural representation of nature than our current hodge-podge of traditional vectors, tensors, complex numbers, differential forms and spinors.


Hmm, the word "interpolate" does not appear in the article. One of the main applications of quaternions in games is interpolating rotations. Does this representation interpolate well?

Also, I've seen several articles trying to convince me that alternative transformation representations like these are better, but I haven't seen much code. Has anyone written a library using these concepts that could replace a traditional vector math library like https://glm.g-truc.net/ for games?


Yes, it does, because it is an identical representation to quaternions. Except it actually explains all the weirdness in quaternions without resorting to 4-space.

This is because quaternions are a subalgebra of geometric algebra in 3-space.


Yes.

You can prove, for two rotors R1 and R2, that slerp(R1, R2, t) = R1 (R1^-1 R2)^t.
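A small numpy sketch of that identity, using the usual (w, x, y, z) unit-quaternion layout (the arithmetic is identical for scalar + bivector rotors; all function names here are mine):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as (w, x, y, z);
    # the same arithmetic as the rotor (scalar + bivector) product.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qpow(q, t):
    # For a unit quaternion q = cos θ + u sin θ (u a unit axis/plane),
    # q^t = cos(tθ) + u sin(tθ).
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    theta = np.arctan2(n, w)
    return np.concatenate([[np.cos(t * theta)], (np.sin(t * theta) / n) * v])

def slerp(r1, r2, t):
    r1_inv = r1 * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate = inverse for unit rotors
    return qmul(r1, qpow(qmul(r1_inv, r2), t))

# Halfway between the identity and a 90° rotation about z
# is the 45° rotation about z:
r1 = np.array([1.0, 0.0, 0.0, 0.0])
r2 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
assert np.allclose(slerp(r1, r2, 0.5),
                   [np.cos(np.pi/8), 0.0, 0.0, np.sin(np.pi/8)])
```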


There is nothing weird about 4-space; what do you mean?


Re-read what you are replying to - they didn't say 4D-space is weird but rather that quaternions are weird:

> all the weirdness in quaternions without resorting to 4-space


Yes, anything you can do on quaternions you can do on rotors. (Updated the article)

I have not seen a clean version of the code online but it is almost the same as for a quaternion.


Great post, although it left me hungry for an explanation of the actual implementation differences between rotors and quaternions, because I got the impression that we use exactly the same real numbers to do exactly the same things. We just imagine the basis of this transformation space not as four orthogonal vectors, but as three orthogonal unit planes plus one real number for scaling.

Those bivectors just explain (incredibly, amazingly well) what quaternions are and why they have properties they have. It doesn't seem to change how you crunch the numbers to get what you want.

...

It's really great that you wrote that post. I feel like for the first time I truly understand rotations in 3D and know why I felt something fishy was going on with results of vector product and expressing rotations with them.


99% of the time this is true.


I’d love to see a small GitHub repo that contains Rotors and Quaternions with compared results.


The results are going to be the same. The arithmetic involved is literally identical.

The real power of the GA version is that you can more clearly geometrically explain what is going on, and you can generalize everything to pseudo-Euclidean spaces or to lower or higher dimensions. GA gives you some more algebraic tools to work with, so when you are trying to write your proofs they are clearer and more concise.


I believe you.

I work in code all day. I have a lot of code that works. I’d love to see code that also works but is easier to understand.

If someone wants to remove something from every 3d engine then I’m going to need to see what the replacement looks like. In actual code.


I don't believe him.

I got a C- in Linear Algebra in college... I remember my Linear Algebra prof saying "maybe I'm being pedantic" when explaining a concept and I was like "wtf are you even talking about?"

I avoid all Math - and I think black boxes are great. The few times I've used functions with quaternions in game programming (I'm not a game programmer) they seemed really easy to use. I'm skeptical that a concept that is easy to understand for people with a Math background is easier to use in code for people like me.


> The arithmetic involved is literally identical.

So to "remove Quaternions from every 3D Engine" all we need to do is rename the "Quaternion" datatypes to "Rotor" and we're done?


If the results are going to be the same, then what's the possible benefit for any user of a 3D engine?

Things like the ability to "generalize everything to pseudo-Euclidean spaces or to lower or higher dimensions", algebraic tools and writing proofs are not relevant to the needs that a 3D engine serves - 3D engines are written to fulfil requirements of engine users (not even engine writers), who won't be doing such things, that's below the abstraction level that's handled by the engine. So from that perspective we're comparing GA-filled black box with a quaternion-filled black box, and either (a) the GA box has better performance; or (b) the GA box has different results that are arguably more correct; or (c) the GA box has a substantially easier API; or (d) the GA box is useless.

It seems to me that you might be aiming at the (c) benefit with all the arguments that this math is easier to understand, but I'm not entirely convinced based on the article; perhaps an illustration of the expected code differences in using a non-quaternion GA-based engine (are there any?) would be helpful. If we can have an engine/API that's easier to understand for beginners and that makes all the basic tutorials simpler, then that would be a nice thing to have.


> If the results are going to be the same, then what's the possible benefit for any user of a 3D engine?

If it is going to be compiled down to the same binary code, what's the possible benefit of any higher level language?


From the possible options I listed above, (c), a substantially easier API.

On the other hand, if it's not exposed to the user, it doesn't particularly matter what structures and math that higher-level language uses behind the scenes to do what it does; most aspects of compiler theory are irrelevant for users of high-level languages.


If the people maintaining the black boxes I rely upon have an easier job keeping them functioning and correct, I would say that is very relevant to me.


Don't know if the article has been updated since your post, but Interpolation and Gimbal Lock are both called out as working with Rotors in the 5th paragraph.

> The change is simple and the code remains almost the same. Anything you can do with a Quaternion, such as Interpolation and avoiding Gimbal lock, you can do on a Rotor. But the understanding grows a lot.


You can take the logarithm of a rotor in order to do interpolation. If you want a transformation represented by a rotor R to happen in N steps, you can apply R ^ (1/N) N times. The Nth root of R is exp(log(R)/N)


And if you don’t want to take the logarithm per se, you can bisect the rotation A by taking the sum of your rotor + 1, and then normalizing the result by dividing by the magnitude, (A + 1) / |A + 1|. Or if you have two rotors A and B, you can interpolate halfway between by taking (A + B) / |A + B|. This requires seven additions, four multiplications, one division, and one square root.
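A numpy sketch of that trick, again with the (w, x, y, z) component layout (bisecting a single rotor A is just the special case halfway(identity, A); the function name is mine):

```python
import numpy as np

def halfway(a, b):
    # Rotor midway between unit rotors a and b: (a + b) / |a + b|.
    s = a + b
    return s / np.linalg.norm(s)

# Halfway between the identity and a 90° rotation about z is the
# 45° rotation about z; the half-angle identities make this exact.
identity = np.array([1.0, 0.0, 0.0, 0.0])
quarter_turn = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
mid = halfway(identity, quarter_turn)
assert np.allclose(mid, [np.cos(np.pi/8), 0.0, 0.0, np.sin(np.pi/8)])
```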


I would replace the last two operations by another multiplication and one reciprocal square root. There is extensive SIMD support for very fast approximations, and getting better in newer chips.


Yes of course. I just mean, conceptually that is what is required. You can replace a bunch of the multiplications and additions by FMA instructions as well.

By the way Raph, I think you might be interested in this draft paper I have been working on (well, not working on for the past two months, but anyway...)

https://beta.observablehq.com/d/e639659056145e88


Very cool! Some of the later sections are broken. The fact that this is interactive is great, maybe I'll get around to putting my stuff in this form.


The broken parts are intentionally commented out, either now redundant or not finished yet. (Observable isn’t the best for version control / etc. yet.)

There are still a bunch of diagrams to make but I got a bit stalled on the project after going on a trip (and taking care of a toddler full time).

There are still a couple of research problems to figure out. In particular how to best set the tangent and curvature at the knots. Just fitting circles through triples of points isn’t the best method.

But I think this thing should compare favorably to Spiro curves for some use cases: in particular it is pretty local, a bit more robust to pathological inputs, and a lot simpler to compute (and explain). (But of course isn’t going to be globally optimizing for some smoothness metric, and isn’t extensional.)

Edit: sorry to bystanders for a completely off-topic conversation.


Agreed. And (as was largely the point of my thesis) these things are tradeoffs, a spline that's perfect in all these ways is not possible. I played around a bit and am not feeling that they'd be great for font design (they play pretty similarly to Séquin's circle splines, which of course you cite, though I think better).

We're now pretty far afield from the subject of the superiority of Rust over C++^W^W^W tau over pi^W^W^W geometric algebra over quaternions. I'd be more than happy to continue the discussion somewhere else.


The same is true (with lower performance, I suspect) with the 3x3 matrix representation.


Matrices for rotation have the problem that you are using a 9-dimensional representation for a 3-dimensional quantity, so it is easy to make matrices that are not rotations (indeed the vast majority of the possible space of matrices consists of transformations that are not anywhere close to a pure rotation). Composing matrices leads to rounding errors. It is more complicated to invert matrices. As you say the matrix logarithm is more complicated. Etc.

If you use a scalar + bivector “rotor” representation, that is only a 4-dimensional representation, which is easy to normalize to unit magnitude.
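A quick numpy sketch of that contrast; renormalizing a drifted rotor is one divide, while fixing up a drifted matrix needs full re-orthonormalization (the polar decomposition via SVD used here is just one common choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# A unit rotor/quaternion that has drifted from rounding error:
r = np.array([0.7072, 0.0003, 0.0001, 0.7069])
r_fixed = r / np.linalg.norm(r)           # one normalization, done
assert abs(np.linalg.norm(r_fixed) - 1.0) < 1e-12

# A rotation matrix with the same kind of drift needs full
# re-orthonormalization, e.g. the nearest orthogonal matrix
# via the polar decomposition:
M = np.eye(3) + 1e-3 * rng.standard_normal((3, 3))
U, _, Vt = np.linalg.svd(M)
M_fixed = U @ Vt
assert np.allclose(M_fixed @ M_fixed.T, np.eye(3))
```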


Isn’t the inversion a trivial transpose for rotations since they’re orthogonal?


It's beautiful to see that the logarithm of a rotation matrix is a skew-symmetric matrix. The skew-symmetric matrices correspond to angular velocity, which behave like vectors. You can add and interpolate as you expect. Rotation matrices multiply. Skew-symmetric matrices add. The logarithmic and exponential map moves between the two spaces.
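A sketch of that correspondence, using Rodrigues' closed form for the exponential of a 3D skew-symmetric matrix (the function names are mine):

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix with skew(w) @ v == np.cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    # Rodrigues' formula: the matrix exponential of skew(w),
    # a rotation by |w| radians about the axis w/|w|.
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Interpolation happens in log space: scale the skew matrix,
# then exponentiate. Half the log, applied twice, is the whole rotation.
w = np.array([0.0, 0.0, np.pi / 2])       # 90° about z
half = expm_so3(0.5 * w)
assert np.allclose(half @ half, expm_so3(w))
```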

Just letting you know that I think it's valuable to bring up matrices into this discussion, even if they do have problems in practice with rounding errors and efficiency.


That’s because skew-symmetric matrices are the Lie algebra of the Lie group of rotation matrices (and the exponential map takes a Lie algebra to a Lie group). In physics we call them the “infinitesimal generators” of the rotation group.


I love knowing about Lie algebras and groups, but I think I disagree about the "because". The matrix exponential and logarithm just are. You can define and compute them without invoking anything about Lie groups.

I didn't know the term "infinitesimal generator". Thanks!


Indeed. And certainly since rotation matrices and quaternions are both representations of rotation, they should have a log that is isomorphic.

Wilder still, the log of a 4x4 transformation matrix has the same tangent-vector properties, giving a coordinate-system-invariant rotation and translation.


Skew symmetric matrices are isomorphic to bivectors :)


> Has anyone written a library using these concepts

Yes, the C++ library you're looking for is here http://versor.mat.ucsb.edu/


Neat, thanks!


Texts on GA seem to start out saying "we're going to replace vector analysis with something intuitive and natural and eloquent and..." and then immediately introduce the "geometric product", which is neither intuitive, nor eloquent, nor, as far as I can tell, natural. Its properties are found via formula-wrangling, and the results you get from it seem to just magically work, rather than being intuitive.

Bivectors in general are conceptually great; it's just the geometric product which I think is conceptually flimsy. And it's surprisingly hard to compute or make sense of the geometric product in general which is why the writer of the first book cited on the OP wrote "I do not think it possible to give a quick definition of the general geometric product."[1]

As far as I can tell 95% of the usefulness of geometric algebra is the usefulness of the wedge product ∧, which is absolutely under-appreciated and appears in loads of places in disguise (for instance, the determinant of a matrix is the wedge product of all of its rows or columns together).
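In 2D the determinant claim is a one-liner to check numerically; a small sketch (`wedge2` is my own name):

```python
import numpy as np

def wedge2(u, v):
    # Coefficient of e1∧e2: the signed area of the parallelogram
    # spanned by u and v.
    return u[0] * v[1] - u[1] * v[0]

u, v = np.array([3.0, 1.0]), np.array([1.0, 2.0])
# Wedging the columns together gives the determinant:
assert np.isclose(wedge2(u, v), np.linalg.det(np.column_stack([u, v])))
```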

The last 5% of the usefulness of GA comes from using the geometric product in vector-rotation via -ava^-1, which is admittedly very useful (it's why physics and computer graphics represent rotation this way, albeit in disguised forms like Pauli matrices and quaternions.) You can express that form without the geometric product, but I haven't found a way that strikes me as very elegant.
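For plain vectors, the sandwich a v a⁻¹ collapses to a classical reflection formula, and composing two such reflections gives a rotation by twice the angle between the mirrors; a small numpy sketch of that (naming is mine):

```python
import numpy as np

def reflect(v, a):
    # For vectors, the GA sandwich a v a^(-1) reduces to this classical
    # formula: keep the component of v along a, flip the rest.
    return 2.0 * np.dot(a, v) / np.dot(a, a) * a - v

# Reflecting in a then in b rotates by twice the angle from a to b:
a = np.array([1.0, 0.0, 0.0])
b = np.array([np.cos(np.pi/8), np.sin(np.pi/8), 0.0])  # 22.5° from a
v = np.array([1.0, 0.0, 0.0])
rotated = reflect(reflect(v, a), b)
# ...which here is a 45° rotation of e1 about the z axis:
assert np.allclose(rotated, [np.cos(np.pi/4), np.sin(np.pi/4), 0.0])
```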

I don't mean to condemn GA - I think what it's doing is massively important. The mathematical language for vector analysis is deficient compared to what we could be using, and a lot of things are more intuitive and natural in better language. I just suspect that the, uh, ideal form of this stuff will look slightly different than GA, and might not include the geometric product at all, but will definitely include the wedge product absolutely everywhere.

(I've spent a lot of free time trying to figure this out but I don't really have a compelling result yet. I've been meaning to try blogging about it, though, since it's basically my favorite thing to study.)

[1] https://math.stackexchange.com/questions/444988/looking-for-...


The geometric product is super useful and important! (As well as much nicer to work with when doing algebraic manipulation.)

In particular, it is what lets you take products, inverses and quotients (assuming the denominator is non-null) of arbitrary vectors.

> surprisingly hard to compute

Hm? No it isn’t....

Several times in the last few years I have done several pages of complicated calculations in terms of coordinates or matrices, where the intermediate steps were basically an indecipherable mess of symbols, and then sat down, really thought about what I was doing for a while, and re-written it using GA language and ended up simplifying my work down to a few lines of simple algebraic manipulations, where I could typically then give a nice geometrical interpretation to each step.

If you want to compute the geometric product using concrete numbers in terms of a basis, then it is straightforward to write the code (or work through by hand).

What is hard is not the geometric product. What is hard is geometry! There is a lot of depth and subtlety, and it takes years to really learn your way around. That is not the fault of the language though; there is a certain amount of irreducible complexity involved.

Personally I would love to get some people together and try to write a high-school-accessible geometry / vector algebra book using GA language. It would be a few years of hard work: figuring out the right order for the curriculum, the right types of problems to build intuition, which tools to include or leave out of an introductory book, how best to show the work for a whole collection of worked examples (ideally with some interactive computer diagrams), and so on.


It is easy to compute on vectors. It is possible, but complicated and unintuitive (I have written code to do it in the past), to compute it on multivectors.

I'm arguing in favor of most of GA's language. I just keep finding that the wedge and inner product parts are fantastic, and the geometric product part isn't. And I think the reason people keep finding GA appealing is because they didn't have the wedge product before, so having that in their conceptual toolkit fixes a lot, while having the geometric product doesn't fix much on top of it. Anyway I've studied GA a lot and I still have basically no idea what 'AB' means when both are arbitrary-grade multivectors, and as far as I can tell most sources don't even try to explain it.


I’m telling you that after a few years working with the geometric product a whole bunch, the wedge product really doesn’t cut it.

> I still have basically no idea what 'AB' means when both are arbitrary-grade multivectors

Any mathematical language (or any natural language) can express a bunch of nonsensical and useless things. What is the sine of the square root of the logarithm of the tangent of some polynomial applied to a scalar? Who knows?!

You can certainly make up nonsensical expressions using matrices, differential forms, etc.

In a concrete problem, sometimes you have to think a bit to figure out what the geometrical meaning is of a particular multivector, but if the problem came out of a physical situation, I haven’t yet found a case where I couldn’t explain it.

Your multivector might be an electromagnetic field. It might be a rotor. It might be a representation of a sphere in the conformal model, ...


To be clear, I don't think the wedge product is sufficient either; I just think it's the real reason most people end up finding GA to be appealing.

I think there's work to do to clean the whole space up. Particularly, the wedge product is a 'join' or 'union' operator (amusingly, it has the wrong symbol). The 'meet' operator is not widely known but should probably be equally prominent. I haven't figured out for myself where the other holes are yet.



As someone unable to fathom the enthusiasm for geometric algebra... at least this defines this product: It's the sum of an inner product and a wedge product. Acting on two vectors these give a scalar and a 2-form. Why is adding these different objects a good idea? Don't they have different units (like, a pure number vs meters squared)?


It's analogous to the real and imaginary parts of a complex number. Does it make sense to add a real number and a purely imaginary number? Aren't they different kinds of things? Yes, and yes!


It would be really helpful if the article had some worked examples of this arithmetic, with actual numbers, the kind with digits and decimal points in. Then things like the structure of a geometric product would be absolutely clear.

Let me have a go. I'll use an asciified version of the symbols, with * to mean multiplication of two scalars, ** to mean raising one scalar to the power of another, and _ to mean taking a component of a vector.

  ab = a.b + a^b
  
  a.b = a_x * b_x + a_y * b_y + a_z * b_z
  
  a^b = (a_x * b_y - b_x * a_y) (x^y)
      + (a_x * b_z - b_x * a_z) (x^z)
      + (a_y * b_z - b_y * a_z) (y^z)
  
  ab = (a_x * b_x + a_y * b_y + a_z * b_z)
     +             (a_x * b_y - b_x * a_y) (x^y)
     +             (a_x * b_z - b_x * a_z) (x^z)
     +             (a_y * b_z - b_y * a_z) (y^z)
So if a = (1, 2, 3) and b = (4, 5, 6):

  ab = (1 * 4 + 2 * 5 + 3 * 6)
     +         (1 * 5 - 4 * 2) (x^y)
     +         (1 * 6 - 4 * 3) (x^z)
     +         (2 * 6 - 5 * 3) (y^z)
  
     = 32 + -3 (x^y) + -6 (x^z) + -3 (y^z)
The dot product makes a scalar, the wedge product makes a bivector, and the geometric product makes a scalar plus a bivector.

You will note that the scalar part is much bigger than the coefficients of the bivector part. That's because the input vectors are actually quite similar - pointing mostly along y and z, with a little bit of x. Hence, their projection onto each other is large, whereas the parallelogram they form is quite small (long and thin). The dot product measures the former, the wedge product the latter.

Have I got that right?

EDIT And to clarify this:

> For any basis vector, such as the x axis, the result [of taking the geometric product with itself] is 1

That '1' isn't the scalar number 1, it's the scalar-plus-bivector 1 + 0 (x^y) + 0 (x^z) + 0 (y^z).
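To make the arithmetic above fully mechanical, here is a minimal Python sketch of the 3D geometric product (my own illustration, not from the article or the comment). Basis blades are encoded as bitmasks (1 = x, 2 = y, 4 = z, 3 = x^y, 5 = x^z, 6 = y^z, 7 = x^y^z), and `reorder_sign` counts the swaps needed to bring basis vectors into canonical order:

```python
def reorder_sign(a, b):
    """Sign from reordering the basis vectors of blade a past those of blade b."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps % 2 else 1

def gp(x, y):
    """Geometric product of multivectors given as {blade_bitmask: coeff} dicts.
    Shared basis vectors square to +1 (Euclidean metric), so they cancel via XOR."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb
            out[blade] = out.get(blade, 0) + reorder_sign(ba, bb) * ca * cb
    return {b: c for b, c in out.items() if c != 0}

a = {1: 1, 2: 2, 4: 3}   # the vector (1, 2, 3)
b = {1: 4, 2: 5, 4: 6}   # the vector (4, 5, 6)
print(gp(a, b))          # {0: 32, 3: -3, 5: -6, 6: -3}
```

Running it on a = (1, 2, 3) and b = (4, 5, 6) reproduces the 32 - 3(x^y) - 6(x^z) - 3(y^z) computed by hand above.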


Sure, you've defined a map V⊗V → R⊕T, where T means 2-forms. But I still don't see why this is useful, apart from being able to extract from it both the wedge product and the inner product which you started with.


I think it is a good idea because both components contain useful complementary information (the cosine/sine of the angle and the plane they form), and the algebra corresponds to geometric operations.

Having an algebra that corresponds to geometric operations is good because you get a high-level framework for how to manipulate quantities instead of having to do things case by case.


Thanks for that, it's a really good explanation!


> the wedge product really doesn’t cut it

What about the wedge product together with the Hodge star, ie the language that tends to be used in Riemannian geometry?


From the sounds of it, I think you'd be much happier working with differential forms! Differential forms use only the wedge product and a duality mapping known as the Hodge dual (it's the GA equivalent of multiplying by the pseudoscalar). Differential forms easily supply 90% of the nice things in GA, so if you're unhappy with the geometric product, I'd say that differential forms are perfect.

Geometric algebra will be here waiting for when you get tired of resorting to the hodge dual to take an inner product ;)


I learned about differential forms in my physics degree, and then later discovered that the exterior algebra made a lot more sense if you introduced it separately. Basically it seems like the exterior algebra is useful in many settings, and differential forms are just one rather confusing application of it. My current opinion is that exterior algebra should, someday, be taught first (around the time of linear algebra), and then later extended to differential forms when derivatives and manifolds get involved.

I agree that the Hodge Dual is easily the worst part of exterior algebra. But you can treat the inner product as more fundamental, via *a ∧ b = <a,b> i. Either can essentially be constructed from the other (iirc).


The Hodge star isn't so bad: It's just the generalization of the idea that you can describe a plane by its normal vector.


On its own it's not bad, but it's troublesome: given a complicated expression with wedges and stars, it's not at all intuitive how you can simplify it without working in coordinates.


Is there a silver bullet, though? The complexity of having to mess around with epsilon tensors is still there no matter which abstraction you use to sweep it under the rug...


The way this text does the geometric product is trying to be intuitive by immediately giving a "physical" interpretation as the sum of the inner and outer products. However, the physical interpretation breaks down when trying to multiply three of these together.

I think Ben Lynn's approach is actually better: https://crypto.stanford.edu/~blynn/haskell/ga.html (Part 1)

It starts off more abstract, defining the geometric product simply as string concatenation (as in "free monoid" if you're familiar with that term, which you would if you have intermediate Haskell knowledge) plus a very natural constraint. From this natural constraint one can then deduce the sum-based interpretation that the OP gives.

The natural constraint is then further generalised and justified in more detail in Part 3: https://crypto.stanford.edu/~blynn/haskell/cga.html


Yeah, I spent a long time trying to find a very clean path to the geometric product and failed so far, but I feel I am getting closer, ahah.


It's very cool that the geometric product of any orthonormal basis for R^2 gives i as a "pseudoscalar".

    Let {e1,e2} be an orthonormal basis for R^2. 
    Let i = e1e2.
    Then i^2 
    =  e1e2e1e2
    = -e1e1e2e2 (anticommutativity)
    = -(1)(1)
    = -1.
Personally that alone is enough to justify learning more about geometric algebra.
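The whole 2D algebra fits in a few lines of code; here is a quick sketch (my own, purely illustrative) that multiplies elements written as (scalar, e1, e2, e12) tuples and confirms that i = e1e2 squares to -1:

```python
# 2D geometric algebra over an orthonormal basis {e1, e2}, with e1^2 = e2^2 = 1.
# A general element is a0 + a1 e1 + a2 e2 + a12 e1e2, stored as a 4-tuple.
def gp2(p, q):
    a0, a1, a2, a12 = p
    b0, b1, b2, b12 = q
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,   # scalar part ((e1e2)^2 = -1)
            a0*b1 + a1*b0 - a2*b12 + a12*b2,   # e1 part
            a0*b2 + a2*b0 + a1*b12 - a12*b1,   # e2 part
            a0*b12 + a12*b0 + a1*b2 - a2*b1)   # e12 (pseudoscalar) part

i = (0, 0, 0, 1)       # i = e1 e2
print(gp2(i, i))       # (-1, 0, 0, 0), i.e. i^2 = -1
```

Note that `gp2(e1, e2)` also returns the pseudoscalar, matching the derivation above.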


That is true of the wedge product also, though. That's my point. The geometric product gives very little power over the wedge product, and when it does, it's massively non-intuitive how it works.


The wedge product of a bivector with itself is 0. To get your bivector to square to –1, you need to be using the geometric product.


One reason the geometric product is attractive over the usual vector notation is the same reason the usual vector notation is attractive over coordinate notation (think of the Maxwell equations in terms of Ex, Ey, Ez etc. [0]): instead of a verbose and highly repetitive system of equations with easy operators, you can reason and calculate with a single equation where the operators have higher complexity, but also correspondingly more properties to exploit, assuming you take the time to get familiar with them. An example, and another reason, is that contrary to the inner product and exterior product, the geometric product is associative, which is clearly useful!

The student who is being advised to learn integrals might similarly protest that "everything people love about integrals, could much more easily be understood by just the desirable parts: limits, of summations, of products. It is unnecessary overkill to learn the theory of integrals like int(f(x), x=a..b)+int(f(x),x=b..c)=int(f(x),x=a..c)". So yes, to prove the theory of integrals you will need to understand limits, sums, products, .. but the resulting properties like the identity above are undeniably invaluable ...

Imagine being a student in an alternate history where integrals were never defined. Of course they can still derive all the results (minus the results stating things about integrals themselves) which we arrive at through our current application of integrals by means of limits of sums of products. But then every derivation that in our world would sanely use integrals would be a long verbose derivation, which they might call "limited totalizations" without reifying this concept. So they are basically rederiving the same result over and over. That's when people normally add syntactic sugar to avoid repetition.

Now imagine being a student following such a course, and further imagine that before following this course you had always been somewhat of an autodidact in high school etc, so half the time you are reading your course notes and half the time you are reading books out of curriculum. One day you stumble on some "integer math" book, and then while reading you realize this is not about integers, so you reread the title and it actually reads "math of integrals". After reading on you realize that all the verbose and highly redundant notation in "limited totalization" class can be avoided.

That's what I experienced: I was reading random books about "algebraic geometry" and one of them was totally whacky and off and not algebraic geometry; then I noticed the title actually read "geometric algebra". I was in my 3rd year of physics. So the people who are paid to teach me are giving me shitty calculitis; most of them are simply unaware of this field, and some of them are aware but are daunted or simply lack the time to go back through all the knowledge they have learnt and rephrase it into this language. And even if they could, it would require the whole curriculum to change in "sync" (well, with a delay of one year per generation...). It really is inertia.

The number of people who have come to understand and use geometric algebra is simply smaller than the number who have come to understand and use normal linear algebra, hence there are more books on normal linear algebra. Just like the number of people who have learnt to read and write is larger than the number of people who understand linear algebra, and hence there are more fiction books, magazines...

However I believe the situation is slowly changing in the right direction, computer science didn't have compilers either for a while, sooner or later people get bored of spaghetti code...

[0] A funny anecdote is that Maxwell -the king of unification in physics- was forced by his publisher to dumb down to this coordinate notation. His original submission used Hamilton quaternions, which had also been recognized by Clifford, who had based his work off of Grassmann's "geometric algebra". Clifford called the algebra "geometric algebra", but readers of Clifford started calling the subject of Clifford's work "Clifford algebra". Or something like that, I don't pedantically check history claims...


By the way, the determinant of a matrix is just the outermorphism of the associated linear transformation applied to a unit pseudoscalar.

The outermorphism of a linear transformation is a useful and convenient (but sometimes tricky) concept. It gives you not only determinants, but also the application of your linear transformation to arbitrary multivectors.


No the determinant is the isomorphism of the third latent moment of the orthonorm. Duh.



> As far as I can tell 95% of the usefulness of geometric algebra is the usefulness of the wedge product ∧, which is absolutely under-appreciated and appears in loads of places in disguise (for instance, the determinant of a matrix is the wedge product of all of its rows or columns together).

Also, it's really intuitive. At least I found it really intuitive when it was first explained to me.


In my opinion, the problem is not the geometric product, but having three products (inner, outer and geometric). I have explained GA using a single product, and I think it is much clearer.

Of course, the inner and outer products are also very useful, but I introduce them only at the last stage, as some extra notation, not to help understand new concepts.


Please do blog about this. I'd be really interested to hear your thoughts!


I use quaternions at work (AVs) and used them in physics and astronomy in school. The math he's talking about is darned close to the math done behind the scenes with quaternions, it's just easier to grasp because it has visual references. Thing is, you only need to understand the math once to be able to use any of this stuff, and there's nothing wrong with taking it for granted after that. Either method is good enough.

Basically, quaternions are hard to understand but easy to use.


I'll take easy to understand and easy to use over that any day. But hey, I also was a teacher once, so I'm probably biased here.


So I have a layman's question: 3blue1brown and Ben Eater did a set of quaternion visualization videos. And they said it was good because it avoided an axis lock thing that happens when x, y, z rotate. He said there were maybe some errors with other models as well. So I guess the question is "does this also not have the locking axis problem?"

I think it's called gimbal lock?


Unit quaternions are isomorphic to rotors, so there’s no practical difference between the two. The difference is how you introduce the concept of 3d rotation: the “traditional” way is to introduce quaternions with i^2 = j^2 = k^2 = -1, and then do some handwaving to convince the reader that they are good for 3d rotations. The bivector formalism is more natural and needs no handwaving.


Thank you! I am really starting webgl, opengl study, and so I am hoping to start on the right footing. I was beginning those quaternion videos, but I think it will benefit me to look closely here too!


Make sure you understand Rodrigues' rotation formula first, then move on to quaternions and rotors. Whenever you become confused, just write out all the terms and you'll get back to Rodrigues' formula.
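For reference, Rodrigues' formula v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta)) is easy to write out directly; here is a small illustrative Python sketch (function names are my own):

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rodrigues(v, k, theta):
    """Rotate v by angle theta about the unit axis k:
    v' = v cos(theta) + (k x v) sin(theta) + k (k . v)(1 - cos(theta))."""
    c, s = math.cos(theta), math.sin(theta)
    kxv, kv = cross(k, v), dot(k, v)
    return tuple(v[i]*c + kxv[i]*s + k[i]*kv*(1 - c) for i in range(3))

# Rotating the x axis 90 degrees about z lands on the y axis (up to rounding):
rodrigues((1, 0, 0), (0, 0, 1), math.pi / 2)
```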


Thank you! Will do


For anyone looking for those videos, here's a link to them:

https://eater.net/quaternions

Each video is interactive, just pause and tweak the 3d visualization. The videos are very good!


Just in case it's not obvious to all, this "axis lock thing" you speak of is so important it could cause a plane to crash! :) (it has implications beyond computer graphics).



Oh wow! I never considered that.


This article will take more time than I have to digest right now, but it's definitely saved.

I have never really understood quaternions. All I knew is that I could use that instead of euler angles, avoid gimbal lock, and would blindly use them.


I think you just described 90% of people who use them. Normally I like having a better grasp of the underlying theory I'm applying, but somehow quaternions seem more likely to make me think, yeah I'll eventually get around to that... instead of diving right in as this article I believe better motivates.


> but somehow quaternions seem more likely to make me think, yeah I'll eventually get around to that

Exactly. And then you forget about them because...well, it's not like this stuff requires maintenance. Once you abstract all rotations and interpolations away, you just use them and forget you don't really grasp why they work.


I'd argue that the geometric algebra presented in the article is much more comprehensible than quaternions and also more useful!


But if the argument is "let's replace quaternions in 3D engines!" then the question is really more "Okay, so do they outperform quaternions, then??" because if you're writing the 3D engine, the underlying code only matters insofar as that it has to do what it's supposed to do. You don't need to understand why it works, just that known operations will lead to known intended results. And if you're _using_ a 3D engine, you shouldn't even care. You just want rotation to work in the least amount of time, with the fewest function calls. What happens at the engine level, hidden by those functions, is entirely irrelevant.

And I don't see a case being made that demonstrates that for the purpose of a 3D engine, this actually "makes more sense". It's just a different way to compute the exact same results. If I'm writing a game, or simulation, or 3D visualisation, I literally don't care how the engine does what I tell it to do, as long as it does it.


> But if the argument is "let's replace quaternions in 3D engines!" then the question is really more "Okay, so do they outperform quaternions, then??" because if you're writing the 3D engine, the underlying code only matters insofar as that it has to do what it's supposed to do. You don't need to understand why it works, just that known operations will lead to known intended results.

At some point, people will have to write and maintain these 3D engines. Having easier to grok maths at that point is a Good Thing.


Yeah but here's the thing: if your math skills are good enough to write and maintain a 3D engine, quaternions are not the problem. They are super easy to work with, and material that explains how to use them is readily available.

Is "what they do" unintuitive? Sure, but I couldn't care less about whether someone can conceptualise _why_ they do what they do, I care about _that_ they do what they do, and that I can trust them to do that. If someone can't understand what geometric operation an non-geometric transformation might map to "in between the start state and the end state" then that someone is focussing on the wrong thing twice:

1. quaternion maths is not geometry, it's algebra. It has some nice geometric analogies when performing specific operations, but trying to understand them _as_ geometry and expecting them to make sense across the board is ridiculous, and 2. if you feel you need to understand the "intermediary" results of a 4 dimensional algebra before you're willing to write or maintain code that uses it, maybe you're putting your foot down in the wrong spot: this is a 3D engine. It's not "easy code that we over-complicated by using quaternions", it's stupidly complex code that we simplified by working with quaternions.


It's also great for interpolation


I don't quite understand why the author opposes quaternions and geometric algebra: the geometric product is almost exactly the formula for multiplying pure imaginary quaternions (up to the sign of the real part).

These are just two constructions of isomorphic Clifford algebras which really are the same thing.


That's not true. The quaternions are the even subalgebra of the 3D geometric algebra. They are not isomorphic. Quaternions are isomorphic to the set of scalars and bivectors in the 3D geometric algebra but the 3D GA also has vectors and a volume form.

This is just like how the complex numbers are isomorphic to the even subalgebra of the 2D geometric algebra.

The geometric algebra is richer, more powerful, more generalizable and more coherent than the quaternion algebra, all while being easier to understand because it's so physical.


I stand corrected, thanks a lot! It seems the article mostly deals with the scalar/bivector aspect of it though.


Also: stop conflating bivectors and vectors. E.g. the inverse-transpose rule for transforming normal vectors needs no explanation when you treat them correctly as bivectors, and not vectors. Same goes for torque, etc.


Is this the same Marc Ten Bosch as from Miegakure [1]? Considering the content, probably the same person.

[1] http://miegakure.com/



I am not sure I understand how to apply this to every 3D engine. A 3D engine uses 4×4 matrices so that it can concatenate rotations, scaling, translations, and perspective correction into one matrix operation using matrix multiplication, i.e., it can represent any sequence of 3D operations in a single matrix.

So what is the equivalent structure I'd use with trivectors to combine arbitrary 3D transformations into a single operation, like every 3D engine needs?

The linked Versor library was also not very helpful -- it has the rotate operation on a vector, but I found no 'make rotation', 'make translation', 'make frustum', 'apply transformation', or 'concatenate transformation' operations.


geometric algebra proponents are the rust strike force of mathematics


Rust seems to have better (free, at least) documentation, though.


It also offers more in the way of concrete benefits.


Not sure if you are being ironic or what you are referring to by "it", but in any case I was mocking both Rust and GA.

In all seriousness, both Rust and geometric algebra are really very cool; but their proponents are so ridiculously over-enthusiastic that it is easy to get tired of them before taking them seriously.


Quats can be represented with (x, y, z, w). Rotors require a vector, bivector and angle (vec_x, vec_y, vec_z, bivec_x, bivec_y, bivec_z, theta). If you are storing or transmitting a quaternion it consumes less space, and in 3D simulations or games a quaternion is therefore advantageous. 3D file formats would explode in size if rotors were used. And more network packet fragments would be needed to encapsulate a world-state update of moving objects, NPCs, and players. That would waste bandwidth (and therefore $), cause "rubberbanding", and increase latency.


No. 3D rotors are represented exactly the same way as quaternions- 4 scalars. The operations are the same as well, only the explanation is different.


Partially true: internally, libraries represent the rotor as just four scalars and can convert these to a quaternion. However, to actually make use of the rotor to do anything useful, e.g. interpolate, which is a power quaternion math has innately, you need to provide a lot more external information: plane origin vector, bivector, angle. A 3D file format storing the transformations between joints of the skeleton of a character, for example, would have to provide these extra bits of information in order to store the frames to perform inverse kinematics.

Likewise, if you needed to store or transmit spherical camera interpolations or non-player or player character transitions over a network this information would also have to be provided. You could perhaps do some optimization, e.g. only sometimes transmitting the origin once and then only sending the bivectors and angles in some cases which still would waste bytes and increase complexity. And, sometimes you couldn't so you'd have to send the whole thing.

But, with a quaternion you get this for free without any logical gymnastics via the previous four scalars to the next four scalars between delta frames. And, in the case of slerp only the lambda of time.

Now you can argue that a rotor could be used locally and then when storage/transmission is required you could convert to and use quaternion math to perform the necessary interpolations and thereby get the space savings. However, this article is specifically asking for the complete removal of quaternions from the field of computer science.

Unless I'm misunderstanding. Though I haven't seen a code example where rotors don't require this extra information. For example, in libvsr they have examples that require all these pieces for each frame. However, maybe that is an inefficient or naive implementation.

I did find this: http://geometry.mrao.cam.ac.uk/wp-content/uploads/2015/02/01...

Which provided a formula for rotor slerp: R(lambda) = (1.0 / sin(theta)) * (sin((1 - lambda) * theta) * R0 + sin(lambda * theta) * R1)

If that is the case, then the bare minimum information required is still more than a quaternion. We'd require the lambda, theta, and R0 (4 scalars), R1 (4 scalars).
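Note that theta in that formula is not extra stored information: it is recovered from the two rotors themselves, since cos(theta) = R0 . R1 (the 4-component dot product). A hedged Python sketch (my own, not from the linked notes) makes this explicit; the inputs are just the two 4-scalar endpoints and lambda, exactly as with quaternion slerp:

```python
import math

def slerp(r0, r1, lam):
    """R(lam) = [sin((1-lam)*theta) R0 + sin(lam*theta) R1] / sin(theta),
    where cos(theta) = R0 . R1 -- theta is derived, not stored."""
    cos_theta = max(-1.0, min(1.0, sum(a * b for a, b in zip(r0, r1))))
    theta = math.acos(cos_theta)
    if theta < 1e-9:                       # endpoints (nearly) coincide
        return r0
    s = math.sin(theta)
    w0 = math.sin((1 - lam) * theta) / s
    w1 = math.sin(lam * theta) / s
    return tuple(w0 * a + w1 * b for a, b in zip(r0, r1))

mid = slerp((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), 0.5)
# mid is (sqrt(1/2), sqrt(1/2), 0, 0): still 4 scalars, still unit length
```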


You are misunderstanding. A quaternion literally is a scalar + bivector (“imaginary”) rotor. We are talking about the same 4 numbers.

It lives inside a geometric algebra which also includes vectors and trivectors, but those are not part of the rotor.

If you store a rotor as a general 3D multivector, it will have 8 entries 4 of which are always 0; this could be done to simplify your code (then you only need one multivector type), but is not a good idea for efficiency of computation/transmission if you need to represent large numbers of rotations and their transmission might be a bottleneck.


According to the formula I provided, to slerp for example, rotors require lambda and theta. Whereas quaternions only require the lambda. Is there an alternative I'm missing?


Which formula are you talking about? The arithmetic is literally exactly the same.

“Slerp” just means follow a path on a circle at uniform speed (i.e. “use trigonometry”). In this case, we are talking about a circle on the conceptual unit sphere in 4D. It doesn’t really matter what names we call the basis elements.


I don't understand the title at all: given that there is an isomorphism, any method of distinguishing an implementation of quaternions from the proposed alternative is necessarily an arbitrary one.


I've played around a bit with geometric algebra in the past year, and I have to say that by far the best resource (albeit a bit short) was Ben Lynn's implementation in Haskell [0]. Unfortunately there isn't a well-written source that explains the methods well. There are multiple ways to do things, and I had trouble finding elegant solutions to practical problems.

[0] https://crypto.stanford.edu/~blynn/haskell/ga.html


I'm no game developer. But wouldn't it be important for game engines to use the most performant approach. Is there a difference in performance when using rotors instead of quaternions?


No difference in the actual math used, just differences in how you approach understanding of the problem.


I read through a bunch of this, and one part seems poorly explained. There’s a clear definition of the product of two vectors, but the result isn’t a vector — it’s a scalar plus a bivector. Then, by magic, three vectors get multiplied! But what does that even mean? Somehow a vector gets multiplied by a bivector, and the result is somehow still a vector.

What’s going on?


As pointed out in that aside, the general vector-bivector multiplication isn't needed to evaluate the reflection case, but I agree it's a little unsatisfying to sweep that under the rug.

You can work everything out by breaking it down to the unit basis vectors, for which the products are explicit. One way of writing it, which feels more comfortable to me possibly at the expense of being the "wrong" sort of intuition, is

u(v^w) = (u.v)w - v(u.w) + u^v^w

(using . for inner product and ^ for outer product). I guess you can also think of the first two terms as the rule for taking the inner product of a vector and a wedge product, though I haven't thought through this completely.

Because of the repeated vector in the reflection formula, that last trivector vanishes, so the result is just a vector, as it will be whenever the vector is coplanar with the bivector.
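In 3D the first two terms are the classic BAC-CAB rule in disguise: (u.v)w - (u.w)v = -u x (v x w). A quick numeric sanity check (my own sketch, not from the comment above):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v, w = (1, 2, 3), (4, 5, 6), (7, 8, 10)

# (u.v)w - (u.w)v: the vector part of u(v^w)
lhs = tuple(dot(u, v)*w[i] - dot(u, w)*v[i] for i in range(3))
# BAC-CAB: u x (v x w) = v(u.w) - w(u.v), so lhs should equal -u x (v x w)
rhs = tuple(-c for c in cross(u, cross(v, w)))
print(lhs == rhs)   # True
```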


The [Aside] section of "The Reflection Formula (Geometric Product Version)" (http://marctenbosch.com/quaternions/#h_13) has the answer you seek ;)


I read that a couple times. I found it unsatisfying because you are dropping terms that happen to have a coefficient of zero without a clear explanation of how those terms are even well formed in the first place.

I suppose this could be answered if you explicitly stated what set the geometric product acted on. As a guess, and from skimming Wikipedia, it’s the direct sum of scalars, vectors, bivectors, etc, up through n-vectors. So 2 + x∧y + 3x∧y∧z is a valid output. And it’s probably straightforward to show that the geometric product is actually defined on this space.

(Hi Marc!)


Ok I understand your question better, might update the article. Like the other commenter said, you can keep applying the geometric product more than once. To find what happens you can just work with the basis vectors.

So, you basically have three cases: x (xx) = x (1) = x -> a vector

x (xy) = x (x.y + x^y) = x (x.y) + xxy = x (x.y) + y -> a vector

x (yz) = x (y.z + y^z) = x (y.z) + xyz -> a vector + a trivector. This only happens if the three vectors are independent, which can never be the case for -ava


It’s certainly been an intense week in the department of “explorable explanations of how to rotate stuff in 3D”, with both this and the 3blue1brown/Ben Eater collab.

https://news.ycombinator.com/item?id=18310788


When just implementing a first person type control/viewport I found it way easier to just use euler angles, and the gimbal lock didn't really feel unnatural to me. So I think there is something to the idea that maybe you don't even need the complexity of quaternions at all, depending on the use case.


First person views typically only use two axes of rotation, so of course gimbal lock will not be a problem. You are not dealing with general 3D rotations.

However, as soon as you want a 3D rotation (eg. in a flight sim) then you really need a way to represent them that doesn't suffer from gimbal lock or have any singularities.

Note: even with an FPS, you do typically want some more complex camera movement (for certain actions or animations that tilt the camera) or need to orient something relative to the camera (like the gun). Euler angles do not allow you to compose rotations, and so for this you'll need to go via an intermediate representation such as quaternions or matrices. Probably this is handled by whatever engine you are using though.


But once you are in gimbal lock, what do you do?


Any rotation around the unaligned axis will free you. The problem with gimbal lock for 1st person controls is mostly avoided because roll is limited and you don't need to smoothly rotate over the poles anyways.

But, I find that I often need smooth, arbitrary rotations for other things, so I just use quaternions everywhere. When using Euler angles as input, I immediately convert them to quaternions.


Excerpt: "It is strange because multiplying two vectors together gives the sum of two different things: a scalar and a bivector. However this is similar to how a complex number is the sum of a scalar and an "imaginary" number, so you might be used to it already. Here the bivector part corresponds to the "imaginary" part of the complex number. Except it is not "imaginary," it’s just a bivector, which we have a concrete picture of!"


Dual quaternions are even better for dealing with dynamic 3D space

http://www.chinedufn.com/dual-quaternion-shader-explained/


This can be better expressed in the geometric algebra by just considering a geometric algebra with a degenerate metric (ie. you use basis elements x̂, ŷ, ẑ and ϵ̂ where x̂² = ŷ² = ẑ² = 1 and ϵ̂² = 0). Including factors of ϵ̂ in your rotors now is equivalent to a dual quaternion.

This way, one gets the wonderful properties of geometric algebra and the useful properties of dual quaternions together.
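A toy illustration of that degenerate basis element: dual numbers, where ϵ² = 0 by construction (class name mine; a dual quaternion uses dual numbers as the coefficients of a quaternion):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0, the degenerate basis
    element described above."""
    def __init__(self, real, eps=0.0):
        self.real, self.eps = real, eps

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps; the b*d term
        # vanishes because eps squares to zero.
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    def __repr__(self):
        return f"{self.real} + {self.eps}eps"

eps = Dual(0.0, 1.0)
print(eps * eps)  # 0.0 + 0.0eps -- the degenerate metric in action
```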


GA is more general; however, my claim is that the DQ space is the most useful one. E.g. sedenions have little use right now.

Also I believe that technically they fall under Clifford algebra.


I kind of expect someone better versed in Geometric Algebra to come along and explain how it somehow also supersedes that.


Clifford himself called it the geometric algebra, so many prefer to call it that, at least when applied to geometric problems.

I think that there is a lot of useful structure in the geometric algebra that gets thrown out when one moves to talking about its subalgebras (ie. quaternions, dual quaternions, vectors, etc.).

It's ultimately up to taste, but I often find that I prefer to use the full GA instead of its subalgebras. Often some unexpected beauty or useful result drops out!


It's completely unnecessary to understand what the individual components of quaternions represent. All that is necessary is to build an understanding of what the operations do for you. Do you need to understand Russell's proof that 1+1=2 to compute the tip on your bar bill, or do you just need to know what to expect when you apply the addition operation to two numbers? Treat quaternions as indivisible whole values as you would floating point numbers.


I will give you an example (from my recent work) where actually knowing what the components mean was useful.

I had to calculate misorientations between many pairs of orientations. Treating quaternions as black boxes, I would compute their quotient and then extract the angle. But since I know what the components mean, I know the misorientation angle comes from just the cosine (the scalar component), and that I can get this value from a simple dot product, so I can skip a few operations per misorientation. Since I had to calculate many of them, this change meant a big speedup.

Black boxes are nice when learning or thinking at a high level, but if you frequently use a black box, knowing what is inside will probably become useful at some point.
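For concreteness, the shortcut described above: for unit quaternions the scalar part of conj(q1)·q2 equals their 4D dot product, so the misorientation angle is 2·acos(|q1·q2|) with no full quaternion product needed. A sketch:

```python
import math

def misorientation_angle(q1, q2):
    """Angle (radians) between two orientations given as unit
    quaternions (w, x, y, z).  The scalar part of conj(q1)*q2 is just
    the 4D dot product, so the full quaternion product is skipped."""
    dot = sum(a * b for a, b in zip(q1, q2))
    # Clamp for safety; take abs() since q and -q are the same rotation.
    return 2.0 * math.acos(min(1.0, abs(dot)))

identity = (1.0, 0.0, 0.0, 0.0)
quarter_turn_z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(misorientation_angle(identity, quarter_turn_z))  # ~1.5708 (90 deg)
```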


Russell's proof that 1+1=2 is more complicated than the existing intuition of adding up numbers.

GA is less complicated and more intuitive than the maths of quaternions.

So your comparison does not really hold up.


Isn't it, in the end, just a notational issue? The quaternion imaginary units are unit bivectors: up to a slightly annoying sign change, the quaternions noted i, j and k correspond to the bivectors jk, ki and ij, don't they?

There should be a way to write a library that translates geometric algebra notations into their corresponding quaternion ones, and reciprocally.
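As a sanity check on that correspondence, here is a toy multiplication table for the even subalgebra of GA(3) showing that i → −e23, j → −e31, k → −e12 (one common sign convention; others differ) reproduces the quaternion relations:

```python
# Multiplication table for the even subalgebra of GA(3), basis
# {1, e23, e31, e12}; entries hand-derived from e_i * e_i = 1.
TABLE = {
    ("e23", "e23"): ("1", -1), ("e31", "e31"): ("1", -1), ("e12", "e12"): ("1", -1),
    ("e23", "e31"): ("e12", -1), ("e31", "e23"): ("e12", +1),
    ("e31", "e12"): ("e23", -1), ("e12", "e31"): ("e23", +1),
    ("e12", "e23"): ("e31", -1), ("e23", "e12"): ("e31", +1),
}

def mul(a, b):
    """Multiply two elements given as (basis_blade, sign) pairs."""
    (ba, sa), (bb, sb) = a, b
    if ba == "1":
        return (bb, sa * sb)
    if bb == "1":
        return (ba, sa * sb)
    basis, sign = TABLE[(ba, bb)]
    return (basis, sa * sb * sign)

# The sign change mentioned above: i -> -e23, j -> -e31, k -> -e12.
i, j, k = ("e23", -1), ("e31", -1), ("e12", -1)
assert mul(i, j) == k            # i*j = k
assert mul(j, k) == i            # j*k = i
assert mul(i, i) == ("1", -1)    # i*i = -1
print("quaternion relations hold under the bivector mapping")
```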


To the brightest mathematicians of HN: could you relate this to tensors?


Tensor algebra is fully expressible in terms of geometric algebra. If you're interested, I'd recommend Geometric Algebra for Physicists by Doran and Lasenby.


The outer product is the antisymmetric part of the tensor product.
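In symbols, with the conventional factor of 1/2:

```latex
a \wedge b = \tfrac{1}{2}\,(a \otimes b - b \otimes a)
```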


I thought Quaternions solved discontinuities in rotations.


How do rotors map to euler angles?


The same way quaternions do.


Why not use Screw Theory?


Check out this gem of a book on screw theory, geometric algebra and robotics. It's so useful that I keep it on my desk.

Geometric Fundamentals of Robotics, by J.M. Selig (1996; 2nd edition 2005)



This article reads like 'I don't understand quaternions therefore they're bad.'

Like what? They are the objectively best representation of rotations...


That's not at all what the author is saying. The author actually has a much deeper understanding of quaternions than most of their users and realizes that they are part of a much more general and useful structure known as the 3D geometric algebra and he advocates using that instead of quaternions.

GA(3) uses objects that are isomorphic to quaternions to represent rotations, but it is much more comprehensible and generalizable than quaternions.


OK sorry, that was rash of me to say.



