Electromagnetism using geometric algebra versus components (av8n.com)
111 points by fanf2 61 days ago | 34 comments



Personally, an appealing part of GA is that it clarifies that the magnetic field is best viewed as an "oriented area" rather than a pseudo-vector [1], a concept that is frequently ill-defined. Usually, lecturers say that the magnetic field is essentially a vector, but that you have to be careful because it flips direction when you mirror space. But how can you check this just by looking at the usual three coordinates of the vector?

With GA, you associate the magnetic field with an oriented plane element. Oriented planes have this flip-under-mirroring property. Because space is 3D, there is exactly one direction perpendicular to this plane element, so you can identify the field with a vector (once you fix an orientation convention). This is called Hodge duality, and this particular correspondence is an accident of 3D space.

If you need more convincing, note that the magnetic field is computed via cross products, and the magnitude of a cross product is the area spanned by the two vectors.

You could argue that you don't need GA for this, only the exterior product instead of the cross product. But by using an algebra that contains both vectors and oriented areas, you can meaningfully sum the electric and magnetic fields. This gives you pretty compact and beautiful equations, which is the essence of this article.
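A minimal numpy sketch of this identification (not from the article; the helper name wedge3 is just for illustration): the three bivector components of a ∧ b are, after the 3D Hodge mapping e2e3 -> e1, e3e1 -> e2, e1e2 -> e3, exactly the components of a x b, and a mirror reflection flips them the way a pseudo-vector is supposed to flip.

    import numpy as np

    def wedge3(a, b):
        """Bivector components of a ^ b in the basis (e2^e3, e3^e1, e1^e2)."""
        return np.array([
            a[1] * b[2] - a[2] * b[1],   # e2^e3 component
            a[2] * b[0] - a[0] * b[2],   # e3^e1 component
            a[0] * b[1] - a[1] * b[0],   # e1^e2 component
        ])

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([-4.0, 0.5, 2.0])

    # In 3D the Hodge dual maps e2^e3 -> e1, e3^e1 -> e2, e1^e2 -> e3,
    # so the bivector a^b and the "pseudo-vector" a x b share components.
    assert np.allclose(wedge3(a, b), np.cross(a, b))

    # Mirror x -> -x: the e2^e3 component is unchanged, the other two flip,
    # which is exactly the pseudo-vector behaviour, read off an oriented area.
    M = np.diag([-1.0, 1.0, 1.0])
    print(wedge3(M @ a, M @ b), np.cross(a, b))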

[1] https://en.wikipedia.org/wiki/Pseudovector


For anyone looking for a sane way to study this stuff, what finally made sense to me was (1) study discrete differential geometry from Keenan Crane's notes: https://www.cs.cmu.edu/~kmcrane/Projects/DDG/ ; (2) Read these notes on geometric algebra that formally lay down what the hell a GA space is and what operations one can perform on them: https://arxiv.org/pdf/1205.5935.pdf (Geometric Algebra: Eric Chisolm)


This is also an excellent, interactive blog post that builds up an understanding of how rotors (from GA) can replace quaternions: https://marctenbosch.com/quaternions/


For clicking convenience: https://www.cs.cmu.edu/~kmcrane/Projects/DDG/ (the link above is broken since HN includes the ]; in the URL).


You still don’t need geometric algebra to combine the magnetic and electric fields, if you view them as differential forms on 4-dimensional spacetime. From what I can tell, this approach is basically equivalent to what this article does. You could directly translate everything into the language of differential forms, because the only geometric products here are in fact just exterior products.
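For concreteness, a sketch of that formulation (signs and unit factors depend on convention): the Faraday 2-form F packages E and B, and the four Maxwell equations collapse to two,

    F = \tfrac{1}{2} F_{\mu\nu}\, dx^\mu \wedge dx^\nu, \qquad
    dF = 0, \qquad d\,{\star}F = {\star}J,

with ⋆ the Hodge star of the spacetime metric and J the current 1-form.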


This "competition" between geometric algebra and differential forms makes me uncomfortable.

As far as I can see (and I'm not an expert), they are just different ways to express very similar concepts. They are still not exactly the same, since the geometric product is not defined for DFs, and there is no Hodge star operator in GA, for example, but everything you can do using one formalism in practice can also easily be done using the other. What am I missing?


There is a fairly nuanced difference that doesn't really matter much unless you are a mathematician. Essentially, in GA you'd do your differential geometry assuming a sort of ambient background space. In regular differential geometry the spaces of forms and vectors are abstracted and don't require a shared geometric embedding. I'm not a mathematician, so this is a very imprecise explanation, but that's how I understand it.


If I remember correctly, the Hodge star much more closely belongs to geometric algebras than it does to exterior algebras, since you need a nondegenerate bilinear form to define the Hodge star, from which you can just as easily define the geometric product, and from the geometric product the Hodge star.

To make an analogy, it sounds a bit like you're asking whether inner product spaces and vector spaces are equivalent. Every geometric algebra gives rise to a Hodge star, and an exterior algebra, and so on, but exterior algebras are a much more general concept, so they're less powerful until you tack that extra structure on.
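For reference, the property that defines the Hodge star on an inner-product space with volume form ω is

    \alpha \wedge {\star}\beta = \langle \alpha, \beta \rangle\, \omega
    \quad \text{for elements } \alpha, \beta \text{ of the same grade},

which is exactly where the nondegenerate bilinear form enters.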


To be honest, my question is much more practical, and much more naive too.

I work with people who use Euler angles to express rotations, and it's a horrible world. I learned quaternions back in the day, and they have some obvious advantages. When I discovered GA some years ago, it was really eye-opening: it makes quaternions an easy-to-explain, even intuitive, concept, and I've used it since then, not only in my own work but also to teach other people. Then I learned about differential forms, which are also very interesting. I think I could base my "intuitive explanations" on this other paradigm, but I'm not sure I should. I don't think it makes a big difference in my particular case, but as I said, I find this "competition" a bit frustrating and am trying to understand it better.
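A from-scratch sketch of the quaternion/rotor picture (plain numpy, no library; the axis, angle, and helper names are just for illustration): the same four numbers (cos θ/2, sin θ/2 · axis) can be read either as a unit quaternion or as a GA rotor (scalar plus bivector), and rotation is the sandwich product.

    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([
            pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw,
        ])

    def rotate(v, axis, angle):
        """Rotate v about a unit axis by angle via the sandwich q v q*."""
        q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * np.asarray(axis)])
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
        return qmul(qmul(q, np.concatenate([[0.0], v])), q_conj)[1:]

    # Quarter turn about z sends x-hat to y-hat:
    print(rotate(np.array([1.0, 0.0, 0.0]), [0.0, 0.0, 1.0], np.pi / 2))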

I cannot argue with a mathematician about whether the Hodge star is a GA or a DFs concept, but I have found it all over the place when reading about DFs, and not so much in GA-related material (though I have a vague idea of how the Hodge star operator can easily be defined in GA using the pseudoscalar). But is it really just a matter of choice? I have heard opinions about which one is more general, but not really convincing arguments (at least not arguments that are obvious to me).

Thanks for your explanation. I think I need to have a deeper look at this stuff. I like to get lost in these rabbit holes, but sometimes it goes a bit over my head.


You are mistaken; the Hodge star does not belong to geometric algebra more than to exterior algebra, that's the wrong way to look at it. Exterior algebra is just a subalgebra of geometric algebra, and they both have the same Hodge star. Saying the Hodge star belongs more to one than the other is a bit silly.


Look, you already need a bilinear form to get the Hodge star. My point is that with that same bilinear form you also get an entire Clifford algebra, and a much more natural definition of the Hodge star. That's all I'm saying. Is that mistaken?


You're mistaken: GA does have a Hodge star, as I've explained many times before: https://grassmann.crucialflow.com/dev/algebra

The exterior product can be derived from the geometric product, so differential forms occur in geometric algebra.
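Concretely, for vectors a and b the geometric product splits as

    ab = a \cdot b + a \wedge b, \qquad
    a \cdot b = \tfrac{1}{2}(ab + ba), \qquad
    a \wedge b = \tfrac{1}{2}(ab - ba),

and one common convention defines the dual of a multivector A as A I^{-1}, with I the unit pseudoscalar (exact sign and ordering conventions vary between texts).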


You can easily define it; that's what I meant by saying that you can do the same things in practice. But it's not usually defined (at least in the books and articles I've read), and it is certainly not as ubiquitous as in DFs texts. And, of course, the exterior product is contained in the geometric product. I guess that, in the same way, you could define a geometric product operator when using a DFs formulation. Would you then say that geometric algebra occurs in differential forms?

In any case, you did not attempt to answer my original question. Are GA and DFs just different ways to define "equivalent" concepts or is there some more fundamental difference that I am missing?


No, I would say differential forms occur in geometric algebra, not the other way around.


Fair enough. I have seen some comments (not in this thread, it was some time ago) suggesting that DFs let you do the same as GA in practice, and that all GA does is add an unnecessary geometric product when exterior products should be enough (not my opinion; I can try to find the original comment if you want). I do not know enough to have an opinion of my own.

You obviously know more than me about this, so I will ask you a slightly different question: if I learn GA well enough and totally ignore differential forms, what will I miss?


Differential forms aren't exactly comparable to GA... I would instead look at the relation between exterior algebra and GA.

Differentials are a concept that comes from doing calculus on manifolds, and exterior products of differentials are just used for tracking information about oriented volumes.

To answer your question (switching differential forms for exterior algebras), you won't miss anything, as the wedge product is part of a GA.


+1 to this exposition!


I found "Geometric Algebra for Electrical and Electronic Engineers" helpful: https://ieeexplore.ieee.org/document/6876131

ABSTRACT: In this paper, we explicate the suggested benefits of Clifford’s geometric algebra (GA) when applied to the field of electrical engineering. Engineers are always interested in keeping formulas as simple or compact as possible, and we illustrate that geometric algebra does provide such a simplified representation in many cases. We also demonstrate an additional structural check provided by GA for formulas in addition to the usual checking of physical dimensions. Naturally, there is an initial learning curve when applying a new method, but it appears to be worth the effort, as we show significantly simplified formulas, greater intuition, and improved problem solving in many cases.


Recently I made a video presentation about geometric algebra, including the formulation of Maxwell's equations in GA: https://youtu.be/7hlDRLEhc8o

Also, my implementation of geometric algebra in the Julia language, Grassmann.jl https://github.com/chakravala/Grassmann.jl

It helped me truly understand Maxwell's equations for the first time, seeing that this is not just some physical artifact but actually a natural, foundational idea in pure mathematics that applies to physics.
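For readers who haven't seen the GA form being referred to, one common spacetime-algebra convention (natural units; details vary by author) packs everything into

    F = \vec{E} + I\vec{B}, \qquad \nabla F = J,

where ∇ is the spacetime vector derivative and J the current; separating the single equation by grade recovers the usual four Maxwell equations.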


That’s cool.

What do you make of the algebra of the dihedrons? https://youtu.be/lqH4BLHGsFw . It’s a “sister algebra” of the quaternions.


These “dihedrons” (which you might as well just call “2x2 matrices with real entries”) are isomorphic to the geometric algebra of the 2-dimensional Euclidean vector plane with signature (+, +), with 1 scalar component, 2 vector components, and 1 bivector component. Personally I find the basis 1, e₁, e₂, e₁e₂ and the notation of GA to be conceptually much clearer than the matrix entries and standard matrix sum/product to work with, but sometimes using a matrix representation is convenient in a computer.

(2x2 real matrices are also isomorphic to the geometric algebra of the 2-dimensional pseudo-Euclidean vector plane with signature (+, -), under a different interpretation.)

By comparison the quaternions are the even subalgebra of the geometric algebra of 3-dimensional Euclidean vector space with signature (+,+,+), consisting of only the 1 scalar and 3 bivector components. Or under a different interpretation are isomorphic to the full geometric algebra of the 2-dimensional vector plane with signature (-, -). They can be represented as Pauli matrices.
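A quick numpy check of the first isomorphism (this particular choice of matrices is just one standard representation):

    import numpy as np

    one = np.eye(2)
    e1  = np.array([[1.0, 0.0], [0.0, -1.0]])   # e1^2 = +1
    e2  = np.array([[0.0, 1.0], [1.0,  0.0]])   # e2^2 = +1
    e12 = e1 @ e2                                # bivector e1 e2

    assert np.allclose(e1 @ e1, one)
    assert np.allclose(e2 @ e2, one)
    assert np.allclose(e1 @ e2, -(e2 @ e1))      # e1 and e2 anticommute
    assert np.allclose(e12 @ e12, -one)          # (e1 e2)^2 = -1

    # A general element a + b e1 + c e2 + d e1e2 = [[a+b, c+d], [c-d, a-b]]
    # fills all four matrix entries independently: the algebra is exactly
    # the 2x2 real matrices.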

For more on this see the papers and books of Garret Sobczyk, https://garretstar.com/secciones/publications/publications.h... ; for example the recent https://www.garretstar.com/sobczyk09-mar-2020.pdf


Nobody ever writes Maxwell's equations using "components"; the title sounds like a straw-man argument.

The rest of the text is well written, but hopelessly useless without a comparison with the typical way to write Maxwell's equations using differential forms (which turns out to be essentially identical to geometric algebra).


If I understand correctly, Maxwell wrote Maxwell's equations using components. That's why they took 20 equations.


Sure. My first sentence is missing the word "today".


Maxwell wrote his equations using components. Heaviside created modern vector notation.


You should check out the bivector community, https://bivector.net/, which is dedicated to geometric algebra.

Join the discord https://discord.gg/vGY6pPk.

Check out a demo https://observablehq.com/@enkimute/animated-orbits


If a point is the intersection of two lines, and a line is the intersection of two planes, then what structures intersect to give a plane? Two 3D spaces? And so the intersection of 2 4D spaces should give a 3D space? How do I express these ideas with GA?


A point is the intersection of two "generic" lines in an ambient 2-d space. But you can have coincident lines which intersect in a line, or parallel non-coincident lines which have no intersection. Further, in 3-d, the majority of lines are "skew", and neither parallel nor intersecting.

Similarly in an ambient 3-d space, 2 generic planes intersect to give a line. But the planes can also be coincident or parallel. And in higher ambient spaces can intersect at only a point, rather than a line, or even fail to intersect in a non-parallel way.

In 4-d space, the intersection of two generic 3-d spaces does indeed give a plane, with exactly analogous caveats.

The standard GA doesn't directly represent general lines or planes, however. The elements are the equivalent of "vectors" rather than "points", and always go through 0. The obvious way to handle these is to parameterize the lines and surfaces, but then you're essentially working with equations for the surfaces and keeping track of the variables.

The slick way of handling it is with _projective_ geometric algebra, and intersections turn into "meets". The meet of two parallel lines (planes) is now a "point (line) at infinity", and of a (line, plane) with itself is the line (plane) again. Skew lines have a meet of 0 (not the point 0, the number 0).
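A tiny illustration of the 2D case using plain homogeneous coordinates rather than a PGA library (the helper name meet is just for illustration): a line ax + by + c = 0 is the coefficient triple (a, b, c), the meet of two lines is their cross product, and parallel lines land on w = 0, the line at infinity.

    import numpy as np

    def meet(l1, l2):
        """Intersection of two lines given as homogeneous triples (a, b, c)."""
        return np.cross(l1, l2)          # a homogeneous point (x, y, w)

    # x = 1 and y = 2 meet at (1, 2):
    print(meet([1, 0, -1], [0, 1, -2]))  # -> [1 2 1], i.e. the point (1, 2)

    # The parallel lines x = 1 and x = 3 meet at a point at infinity (w = 0):
    print(meet([1, 0, -1], [1, 0, -3]))  # -> [0 2 0], the vertical direction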


Thanks again, internet. Some unknown intelligence, ironically named wnoise, is confirming my intuition: that a plane is the result of two intersecting 3D volumes in a 4D space, and that 'our' 3D space (where the meat lives) is the intersection of two 4D volumes in a 5D space. But dear wnoise, enlighten me about '_projective_ geometric algebra, and intersections turn into "meets"'. Where can I find the book 'Projective Geometric Algebra for Dummies'?


I don't have a book recommendation handy. It's "just" a combination of two neat techniques: get a book on each, understand them, and you have the combination.

GA has been well discussed here, so, the other half:

Projectivization is a fairly standard trick even for normal geometry. It's adding an additional dimension, which in most contexts is just set to 1. (Projective actually just means treating all points on a ray as equivalent; this loses the dimension you just gained.)

It lets rotations and translations be treated in a nearly uniform manner, and lets you do rotations around points that aren't the origin. It's used all over the place in much graphics code (usually under the name homogeneous coördinates). The last section of https://en.wikipedia.org/wiki/Homogeneous_coordinates discusses this briefly.
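A small sketch of that trick (made-up numbers; the helper names are just for illustration): rotation about an arbitrary point p becomes a single 3x3 homogeneous matrix T(p) R T(-p).

    import numpy as np

    def translation(tx, ty):
        return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rotation_about(px, py, theta):
        """Rotate around the point (px, py) instead of the origin."""
        return translation(px, py) @ rotation(theta) @ translation(-px, -py)

    # Rotating the point (2, 1) half a turn about (1, 1) gives (0, 1):
    v = np.array([2.0, 1.0, 1.0])        # homogeneous point, w = 1
    print(rotation_about(1.0, 1.0, np.pi) @ v)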

For online resources for the combination: another commenter has recommended https://bivector.net/ , and it looks okay, with sections specifically on 2 and 3 dimensional projective geometric algebra.

There is also the nice C++ header library klein: https://www.jeremyong.com/klein/


Total non-expert here, but your fascinating question got me thinking out loud.

Two 3D volumes in a 3D space obviously intersect to give another 3D volume, unless they're tangent. So at first glance I'd be tempted to say no, but...

Two overlapping polygons in a 2D space also define another 2D polygon. Two overlapping line segments in a 1D space define another line segment. You only get a reduced-dimension object at the intersection if they're intersecting in a higher dimensional space. Otherwise they can't intersect "at an angle".

So I'd say purely by following the pattern that it must be that two 3D spaces intersecting "at an angle" within a 4D space define a plane by their intersection.


That's why I ask how to describe this problem formally. It's too 'squishy' to say two 3D spaces intersecting at an angle in a 4D space. I thought perhaps I could use time to untangle this visualization: defining a solid as the result of taking a plane and sweeping it through space, then envisioning the 'growth' of a solid from a moving plane and imagining the intersection of the growing solid with another growing solid from a different plane.


It's not too squishy. The important thing to understand here is that a plane is an infinite object in a given subspace, just like a line is. Two infinite objects intersecting give you an infinite object in the subspaces they share, and a zero-sized object in the subspaces they don't share.

The shared subspace of two planes in 2D is obviously the same plane. The shared subspace of two planes in 3D is either zero (two parallel non-incident planes), a line (the intersection they share), or a plane (two incident planes). So you get a null, a one-dimensional subspace, or a two-dimensional subspace, depending on how large a subspace the objects share. A pair of three-dimensional infinite objects in four-dimensional space may therefore be disjoint (resulting in a null), incident (resulting in an infinite volume), or intersecting (resulting in an infinite plane).

The source of confusion here is that we're mixing up (bounded) volumetric shapes, line segments, and planar polygons with their infinite counterparts: the lines, points, planes, and infinite volumes. An infinite object has no shape; it is infinite in some dimensions and zero in all others. This is what gives us those clear, well-defined intersection objects, which are either zero or infinite.

A bounded object has a shape: it has a boundary that is not infinite. When you intersect such objects within their shared subspaces (two line segments on the same line, two polygons on the same plane, two volumes in the same space) you get either a null, or an object of the same type. This is obvious when you make one of the objects infinite: the intersection of a line segment with its own line gives you the same line segment.

So the reason you are having difficulty imagining two somethings that will intersect in a plane is that a plane is an infinite object, so only two infinite objects can intersect in a plane. You need two infinite volumes that pass through the same 4D space but are not the same volume. These volumes share a two-dimensional subspace: the region of 4D space they intersect in is a plane, infinite in two dimensions and zero-sized in the two others.
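One way to make the dimension counting concrete is plain linear algebra: a generic 3D subspace of R^4 through the origin is the null space of a single linear equation, and the intersection of two of them is the null space of the stacked system. A quick numpy check (random coefficients, so "generic" with probability 1; the helper name is just for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(4)   # each row cuts out a 3D hyperplane through 0 in R^4
    b = rng.standard_normal(4)

    def nullspace_dim(M, tol=1e-10):
        M = np.atleast_2d(M)
        return M.shape[1] - np.linalg.matrix_rank(M, tol=tol)

    print(nullspace_dim(a))                   # 3: one constraint leaves a 3D subspace
    print(nullspace_dim(np.vstack([a, b])))   # 2: the intersection is a plane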


Ah, I was starting with bounded objects on purpose to build intuition, but I tried to be clear in my language about when I was talking about bounded vs unbounded objects. I was aware that the question was about unbounded objects. I thought it would be easier this way because it's hard to visualize intersecting 3D spaces correctly, so I wanted to show that picturing them as bounded volumes leads to the wrong conclusion by way of analogy to lower dimensional objects. I left out the null intersection case for brevity.

Thanks for the more thorough explanation!



