The basic object that the author seems to be interested in is that of an "algebra over a field" (https://en.wikipedia.org/wiki/Algebra_over_a_field).
Specifically: invertibility of all elements with respect to the multiplication leads to the notion of a division algebra, and these have been studied for a long time. (https://en.wikipedia.org/wiki/Division_algebra)
When studying math at German universities, algebras are something you'll encounter in your 2nd year at the latest (they might already show up in 1st-year analysis, albeit with a different focus). Implicitly, division algebras show up a lot when students learn about field extensions, Galois theory and the algebraic closure of a field (usually 3rd semester). A more general treatment of division algebras is not a common subject, though.
See here: https://en.m.wikipedia.org/wiki/Geometric_algebra
I think it is probably not so helpful to merely think of it like a division algebra, and it is better to stay focused on the geometry. Curiously I find it easier to relate “actual” linear algebra to geometry than the thing people often call “linear algebra” that involves writing columns or rows or grids of numbers and manipulating them.
As a concrete example, consider the elements 1 + x and 1 - x, where x is a unit vector (so xx = 1). Their product is 1 - x + x - xx = 1 - 1 = 0, so both are zero divisors, and neither 1 + x nor 1 - x can have an inverse.
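A minimal sketch of this in code (the class name and representation are mine, purely for illustration): elements of the form a + b·x, multiplied with the rule xx = 1.

```python
# Elements a + b*x in an algebra where x*x = 1 (e.g. x a unit vector
# in a geometric algebra). The class name E is made up for this sketch.
class E:
    def __init__(self, a, b):  # represents a + b*x
        self.a, self.b = a, b

    def __mul__(self, o):
        # (a + b x)(c + d x) = (ac + bd) + (ad + bc) x, using x*x = 1
        return E(self.a * o.a + self.b * o.b,
                 self.a * o.b + self.b * o.a)

    def __repr__(self):
        return f"{self.a} + {self.b}x"

p = E(1, 1) * E(1, -1)   # (1 + x)(1 - x)
# p is 0 + 0x: 1+x and 1-x are zero divisors, so neither is invertible
```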
division algebras tend to be quite boring (if they are finite then they are just a finite field; if they are finite-dimensional over an algebraically closed field then they are just the field itself. I guess the quaternions are an interesting example in the non-algebraically-closed case, but I think if you are over something other than R you're really just talking about a field extension)
clifford algebras are a sort of generalization of the exterior algebra one would have encountered in differential geometry and other spaces.
in fact it could be considered a "quantization" of the exterior algebra. as in "quantum groups". which is an entirely different part of maths. but that's not what this article is about.
I think using the language of geometric algebras / clifford algebras in physics as this article does versus the more traditional language is just a matter of taste.
So your point about division algebras is not particularly relevant to the article.
* - Frobenius's theorem classifies all the finite-dimensional associative division algebras over the real numbers. They are: the real numbers, the complex numbers, and the quaternions. There are no others.
Well, the author talks about what "we learned in school", not university, so that checks out but only because you two have different audiences in mind.
This is not unheard of in the UK – but it is not common either, and depends on exactly which A-level modules you did. My understanding is that it's quite rare to do exactly this in high school in the United States – but there, I think limits are taught much more heavily. I think having a clear, short statement of "assumed knowledge" somewhere probably helps avoid these issues.
(I thought the article was excellent, and beautifully illustrated!)
It is unexceptional (indeed, expected) to get through an American undergraduate science or engineering degree without ever taking an abstract algebra course (much less the 2+ apparently expected of German pure math students).
But in any event, the top post here by Garlef is barking up the wrong tree. Division algebras, field extensions, and Galois theory (per se) are not the tools to use for studying arbitrary-dimensional geometry. What you want is Clifford algebra (which Clifford himself, and later Hestenes, call “geometric algebra”) and then geometric calculus, which can be used on arbitrary manifolds, in non-metrical contexts, etc.
Basic geometric algebra should be taught to advanced high school students and all undergraduates studying any technical subject.
Math students looking for a math-style introduction to geometric algebra should try Chisolm (2012) https://arxiv.org/abs/1205.5935
And it goes through some very advanced subjects in both. It basically prepares you to be an engineer in whatever field you choose, be it a structural engineer, or a computer scientist.
I studied mathematics in the UK and can confirm I learnt a lot about fields and other forms of linear and abstract algebra.
But nobody is holding you back from visiting 'real' math lectures ;)
That's simply not true. It depends entirely on the particular institution and its roots.
There are two origins of CS in German universities: electrical engineering and maths. At universities where CS originated as a subfield of maths, undergrad CS education is very similar to a maths undergrad to the point that most of the tests/mid-terms are basically identical between CS and maths.
If on the other hand CS came from the electrical engineering department, the focus is significantly less on maths and the lectures are very different indeed.
So you'd have to look at the history of each university and where the CS department originated to find out.
Ah, I see, thanks for the information.
Which university would be an example for the latter?
Mathematics (I started a PhD but left academia in favour of starting a company). My focus was on category theory / algebra.
>In this post we will re-invent a form of math that is far superior to the one you learned in school. The ideas herein are nothing short of revolutionary.
> I firmly believe that in 100 years, Geometric Algebra will be the dominant way of introducing students to mathematical physics. In the same way that Newton's notation for Calculus is no longer the dominant one, or that Maxwell's actual equations for Electromagnetism have been replaced by Heaviside's, textbooks will change because a better system has come along.
These claims are wrong.
There are three standard notation methods in physics: vectors, tensors, and differential forms. Geometric algebra is, as the article points out, a more powerful version of the usual vector notation. But it is deficient in various ways when compared to tensor notation (for calculations) and differential forms (e.g. if you want to work basis-free). [I'm oversimplifying a bit, but a full discussion is too long for a comment here.]
Anyway, geometric algebra is not some esoteric secret. People know about it and have decided not to teach it, because the stuff that's already taught is better.
[I picked up this specific phrasing from another user here, knzhou, which I think is a particularly good way of explaining it.]
The other thing is that I hit a wall when trying to read theory papers, because nobody ever explained what a wedge operator or a two-form (and so on) actually is. I was able to mechanically follow calculations, but never developed a really good intuition.
We learned math together with the mathematicians in a very axiomatic way. Without a geometric intuition, I had a hard time understanding why you need dual vectors like one-forms or covariant vectors. They seem just like a convoluted way to write the scalar product.
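For what it's worth, the "convoluted scalar product" feeling has a kernel of truth: with a metric, every one-form corresponds to a unique vector, and applying the one-form is the same as taking the scalar product with that vector. The point of keeping them separate only shows up when the metric isn't the identity. A tiny sketch (the metric matrix here is a made-up example):

```python
# A one-form is a linear map from vectors to scalars. Given a metric g,
# each one-form omega corresponds to a unique vector u via
# omega(v) = <u, v>_g ("lowering the index"). With the Euclidean metric
# the components of omega and u coincide, which is why one-forms can
# look like a roundabout dot product; with another metric they differ.
g = [[1.0, 0.0],
     [0.0, 3.0]]          # toy non-Euclidean metric (assumed for illustration)
u = (2.0, 5.0)            # a vector
omega = tuple(sum(g[i][j] * u[j] for j in range(2)) for i in range(2))
# omega = (2.0, 15.0): lowering the index genuinely changed the components
```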
Disclaimer: I haven't read it (but I have heard good things).
But the lessons learned from there are largely orthogonal to the lessons you will learn from working with GA for a while.
To the grandparent, I would also recommend the very pretty book http://matrixeditions.com/5thUnifiedApproach.html
> the stuff that's already taught is better
These two statements seem contradictory.
> But it is deficient in various ways when compared to tensor notation (for calculations) and differential forms (e.g. if you want to work basis-free)
The author made no claims about tensor notation or differential forms; perhaps those might displace all vector-like notation (GA or otherwise) in 100 years, in which case the author's claim can be weakened to "in 100 years, GA will be the dominant form of vector notation".
As for "the stuff that's already taught is better", notice that such stuff includes:
- Complex numbers
- Matrix algebra
- Dirac notation
You seem to agree that GA is "more powerful" than the Gibbs-style vector algebra normally taught. The article is arguing that GA is also a simpler and more consistent approach, which I agree with (complex numbers are certainly simpler on their own, but are a little redundant if we're using GA for the rest).
From my own experience in formal education (UK high school and undergraduate physics), I never encountered tensors or differential forms. I've since learned a little about tensors (for general relativity), but that's been due to my own curiosity; I've learned a little about GA for the same reason. I never used quaternions (hence why I left them out of the above list), although I'm aware of them and that they're used e.g. in computer graphics. I used vectors, pseudovectors, and matrices a lot, and I'm certain those topics would have been easier to learn and comprehend if they'd used GA instead.
Grassmann’s exterior algebra -> Differential forms
Cartan’s Exterior Derivatives -> Differential forms
Quaternions -> Clifford Algebra
exterior algebra -> Clifford Algebra -> geometric algebra
Differential forms are a particular kind of tensor and tensors can be defined in terms of multilinear maps. As spekcular says, the standard curriculum covers differential forms, tensors and vectors. This entails becoming familiar with multivectors, the wedge product and multilinear algebra, making geometric algebra a relatively small delta to pick up.
On the other hand, the standard course will also prepare you for mathematical topics like Lie derivatives, differential geometry and de Rham cohomology.
Other than physics, the standard approach equips you with the mathematical machinery underlying many topics in machine learning and statistics like Hamiltonian Monte Carlo, automatic differentiation, information geometry and geometric deep learning.
The central advantage of geometric algebra over the standard approach isn't that it's better or more general; it's that the pedagogical material for it is generally far better than that for the standard course.
Could you be a bit more specific about which "standard curriculum"/"standard approach" you're talking about?
For example, in my formal education (high school; masters with physics major, comp. sci. minor; 4 years of a comp. sci. PhD (abandoned)), I did not encounter differential forms, tensors, multivectors, the wedge product or multilinear algebra (or quaternions, lie derivatives, differential geometry, (co)homology, etc.).
Maybe you're talking about a "standard approach" for a pure mathematics curriculum, or perhaps physics/math grad school?
All I can say is that high school and undergraduate physics (in the UK, circa the late noughties) (a) does not standardise on those topics, (b) is filled with tricky operations which are easy to mix up or perform the wrong way around (e.g. cross products, matrix multiplication, pseudovectors), and (c) many of those annoyances would simplify away under GA.
It's a cliche that physicists (certainly when teaching) cherry-pick the parts of mathematics they find useful. All of those concepts would certainly be useful in a physics course, but would perhaps be too much to fit in; yet there's certainly enough scope to cherry-pick GA (since we can drop Gibbs-style vector algebra to make room). Perhaps something else, like differential forms, might be even better; I honestly don't know (maybe I'll do some reading about it).
By "Gibbs-style" I mean the 'cross product and dot product ought to be enough for anyone' approach that permeated my undergraduate learning.
I agree with you that pseudovectors, cross products and vector calculus are a terribly adhoc way to teach this stuff but a course covering linear algebra with differential forms elegantly unifies, corrects and generalizes them. Standard is also in contrast to the geometric algebra/calculus alternate path.
If you can’t invert vectors, you aren’t studying vector calculus properly. ;-)
Differential forms are a half-baked formalism.
Unfortunately I don’t know of any great undergraduate level geometric calculus textbooks. Ideally there would be something like Hubbard & Hubbard’s book (http://matrixeditions.com/5thUnifiedApproach.html) written using GA as a formalism.
Hestenes & Sobczyk’s book (http://geocalc.clas.asu.edu/html/CA_to_GC.html) is a hard slog, and not appropriate for an undergraduate audience.
I can't emphasize enough how wrong this is. It's the standard formalism in research-level physics and math for good reasons.
Either way of going about what is in essence the same material entails becoming familiar with multivectors, the wedge product, and multilinear algebra, whether you do it through geometric algebra or the standard approach.
Your UK undergraduate physics must have been a bit different to mine. About a third of my physics course was taught by the maths dept, and tensors/algebras were very much a part of that.
I recall, after freshers week, the dean getting everyone together. He said two things:
- Hopefully you all had a great fresher's week, now it's down to business, and
- Make sure you have fun at college.
He also had a projection on the overhead saying "If you can't blind them with science, baffle them with bullshit". I'm reasonably certain the second statement above was the latter, because...
He then casually mentioned a "maths refresher" 2 week course that all freshers had to take before "the real stuff" started. That "maths refresher" was the entire Further Maths 'A' level syllabus. In two weeks. Those of us who had done Further Maths at school were fine. Those that hadn't were shell-shocked.
Heh, that reminds me of my first physics course in an undergraduate computing degree (in Romania). The curriculum was so well designed overall that this Physics course needed linear algebra concepts that would be taught halfway through the semester in Algebra, integration along a surface and similar topics that would be taught at a similar point in the Calculus course, and some Statistics I don't remember that would only be taught in the second semester.
The prof's solution? He taught a 3-hour course covering all of the above, and considered that good enough for all future courses. This particular Physics course later went on to cover analytical mechanics (generalized coordinates, Lagrangians, Hamiltonians), electricity, general relativity, statistical thermodynamics, and quantum mechanics, all in a single semester.
Needless to say, 99% were happy they passed and couldn't tell you a single thing about any of these subjects a few minutes after the final exam.
Heh, I recall managing to coast for a short time thanks to having done AS Further Maths.
The Further Maths syllabus was quite modular, and the modules our teachers picked had some discrete math (sorting algorithms, Dijkstra's algorithm, bridges of Königsberg, etc., which was useful for comp. sci.), and some which complemented the regular maths course (complex numbers and more calculus, which was certainly useful for physics).
That meant you could take AO (a halfway house between O and A) when everyone else was taking their normal O levels. The thing is that the extra stuff in AO was all Pure Maths, and formed a fair amount of the easier "P1" maths syllabus for the normal A level maths exam, which had P1 and Me1 (Maths with mechanics 1, basically statics).
Because you'd done that work already prior to the A level years, you could take "A level maths" after only 1 year (which looked really good on UCCA applications :), so you've now done an exam consisting of the two 'P1' and 'Me1' papers in the first year of your A levels.
Which meant that in the second year of your 'A' levels, you could do 'Pure Maths' (P1, P2) and 'Further Maths' (Me1, Me2) for a total of 3 maths A levels.
On top of that, you had your other two subjects (mine were Physics and Chemistry), and because it was the JMB board, everyone got to do "General Studies".
Getting all of them gave you 6 A levels, even though some of the work was duplicated in the maths arena (over different years of course :)
S levels were a bonus on top - there was no fudging for those, though, you just took what you thought would be useful to study. They gave me maths and physics because I'd said I was going to do physics at college... :)
There is nothing basis-dependent in Geometric Algebra. This presentation started from a basis, but then again so do many presentations of differential forms, leading to 2-forms like dx \wedge dy and so on.
The actual difference is that Geometric Algebra requires a choice of inner product (actually, you can get away with any bilinear form), while differential forms do not. However, some of the important operations on differential forms in physics do require an inner product (e.g. the Hodge star operator and the codifferential), so you end up back on equal footing with GA again.
My skepticism of the supposedly superior pedagogy of Geometric Algebra is the following:
- 3D vector algebra with the cross product operation and the dot product operation is fairly easy and intuitive. Its replacement by GA might not be so easy. So maybe GA should be introduced after the vector formalism.
- An arbitrary element of a Geometric Algebra might not have a geometric meaning. For instance, some elements of a GA are vectors, while some are scalars, but there are also these exotic mixed quantities which are scalars plus vectors. This is pretty hard for me to understand intuitively.
- An arbitrary matrix has a geometric meaning. It's essentially just a linear transformation. By contrast, I don't feel that an arbitrary element of a geometric algebra has a geometric meaning.
- Consider those elements of a Geometric Algebra which represent rotations -- they are called rotors. Observe that if "z" is a rotor then the element "-z" is also a rotor which stands for the same rotation as "z". So there is more to a rotor than whatever rotation it describes. This seems very unintuitive and advanced. (I know that this behaviour has applications for the study of spin-1/2 particles in quantum physics).
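The rotor sign ambiguity is easy to see concretely with quaternions, which are exactly the rotors of 3D GA. A sketch (function names are my own), using the sandwich product q v q* to rotate a vector:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z) tuples
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(q, v):
    # sandwich product q v q* for a unit quaternion (3D rotor) q
    w, x, y, z = q
    qc = (w, -x, -y, -z)                      # conjugate
    return qmul(qmul(q, (0.0,) + v), qc)[1:]  # drop the (near-zero) scalar part

theta = 1.0
r = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))  # rotor: rotate about z
neg_r = tuple(-c for c in r)
v = (1.0, 2.0, 3.0)
# r and -r produce exactly the same rotation of v, because the two
# sign flips in the sandwich product cancel
```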
I also have trouble understanding where the rule for multiplying two elements of a geometric algebra comes from. It's an operation, introduced from seemingly nowhere, which happens to have some applications in some areas. But I'm not comfortable with a multiplication rule being introduced out of nowhere without being derived out of something. The claim that it has a consistent geometric meaning from which it can be derived is never justified. My criticisms are therefore largely pedagogical.
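On where the multiplication rule comes from: for vectors it decomposes as uv = u·v + u∧v, i.e. a symmetric scalar part plus an antisymmetric bivector part, and the whole rule follows from demanding vv = |v|². A small sketch in 3D (the representation of a bivector as three components is my own convention here):

```python
def geo(u, v):
    # geometric product of two 3D vectors: uv = u.v + u^v
    # returns (scalar part, bivector components in the order e12, e13, e23)
    dot = sum(a * b for a, b in zip(u, v))
    wedge = (u[0]*v[1] - u[1]*v[0],   # e1^e2
             u[0]*v[2] - u[2]*v[0],   # e1^e3
             u[1]*v[2] - u[2]*v[1])   # e2^e3
    return dot, wedge

# For any vector v, the product v v is |v|^2 + 0: the wedge part vanishes,
# which is exactly the contraction rule the multiplication is built around.
geo((1.0, 2.0, 3.0), (1.0, 2.0, 3.0))   # -> (14.0, (0.0, 0.0, 0.0))
```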
Further, teaching differential forms prepares my students to engage with the (vast majority of the) existing math and physics literature. Teaching geometric algebra doesn't.
The practical reasons boil down to: I have to teach the standard stuff because otherwise they can't read the literature. Having done that, what's the marginal benefit of teaching GA? Not a lot.
I have no opinion about the claims, but I loved the article, as I quickly saw the gains from this algebra for my very basic needs.
After checking out your 2 suggested alternatives, I'm not so convinced they are easier to understand.
That's not just a gross oversimplification; it's also flat-out wrong if what you meant was that it only has vectors. It has more general objects called multivectors, obtained through pretty much the same process by which you get one-forms, two-forms, etc. from the wedge product.
In fact, both GA and differential forms build from the exterior algebra, and you can go from the former to the latter through geometric calculus (one key difference e.g. would be the method of reciprocal bases to compute inner products with non-orthonormal bases, rather than explicitly working out a basis and then its dual). So I'm confused about your remark regarding its alleged deficiency vs. differential forms if you pretty much reconstruct it within the GA/GC system (especially regarding working basis-free).
With regards to tensor notation in terms of calculations, if you mean all that index gymnastics, well GC still openly provides that way of computing things out from what you're used to.
What I like about geometric algebra/geometric calculus is precisely the way in which it's nothing new: it's putting everything people use in one system by clarifying the connections between these seemingly disparate systems. Even lie groups/lie algebras can be constructed rather efficiently in the algebra.
Another appealing feature of GA is its ability to make pretty transparent an old theorem of Cartan and Dieudonné that says you can view geometric transformations like rotations, and even translations (in projective geometry), as compositions of reflections.
There are other appealing features like this in terms of classifying and relating different geometries, harkening back to the Erlangen program. But my point is that even in terms of concrete calculations, it's not quite right to say it's just a "more powerful version of the usual vector notation": it includes more general objects than vectors, and it still offers very similar ways of doing calculations (almost a kind of "backwards compatibility" with tensor index calculations), with the added bonus of making the transition from vector calculus to tensors seamless, alongside making its relation to other systems more transparent.
Suppose I want to teach first-semester mechanics. I can get through this fine with the usual vector notation. Vectors and dot products are intuitive when taught well (the latter just being projections), and while cross products are a little hairy, they don't play a major role in the course. There's no time for GA, and it would confuse more than illuminate in any case.
Next, I want to teach E&M. Here, I'd probably lead with the usual vector calculus notation (because even if it's ugly, it's standard and students should know it), and then follow with an explanation in terms of differential forms. [I assume this is a more theoretical, or honors, class; I might stick with vector calculus if it's more computational.] So now students know differential forms, they can do everything in a coordinate-free way and on manifolds, and they can access a significant amount of standard physics and mathematics literature.
Having proceeded in this way, what does introducing GA do except suck up a lot of class time? To me, it seems clunky and without any distinctive advantages.
Another question to think about: if this notation system is so good, why don't working mathematicians or physicists actually use it? For example, people thought Feynman diagrams were strange at first, but they proved their value and consequently caught on.
Again, my argument is that this is not some revolutionary esoteric knowledge, it's well-understood stuff that people don't teach for good reasons.
If you need to teach undergraduate mechanics, I highly recommend you at least read some of Hestenes’ New Foundations for Classical Mechanics http://geocalc.clas.asu.edu/html/NFCM.html
> without any distinctive advantages
The most basic distinctive advantage is that you can invert vectors (which is incredibly useful!!) without needing to pretend that vectors are matrices, complex numbers, or some other kind of object.
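To make the inversion point concrete: since vv = |v|² is a scalar, the inverse is just v⁻¹ = v/|v|². A sketch (helper name is my own):

```python
def vinv(v):
    # In GA, v*v = |v|^2 (a plain scalar), so v^-1 = v / |v|^2
    n2 = sum(c * c for c in v)
    return tuple(c / n2 for c in v)

v = (3.0, 4.0)
vi = vinv(v)
# The geometric product v * v^-1 has dot part 1 and wedge part 0
# (v and v^-1 are parallel), i.e. it really is the scalar 1.
dot = sum(a * b for a, b in zip(v, vi))
wedge = v[0] * vi[1] - v[1] * vi[0]
```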
GA takes most of the advantages of complex numbers vs. R² for representing plane geometry, but extends them to arbitrary dimension, and extends them further (when using complex numbers for plane geometry you end up representing vector–vector products via the obscure z̄w product involving complex conjugation, and it is easy to get confused about the difference between a vector vs. a scalar+bivector).
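That z̄w product is worth seeing once: identifying plane vectors with complex numbers, conj(u)·v packages the dot product and the wedge (signed area) into one object, which is precisely the scalar + bivector the geometric product gives you.

```python
# Identify plane vectors with complex numbers: u = a+bi, v = c+di.
# Then conj(u)*v contains both classical products at once:
#   real part = u . v  (dot product)
#   imag part = u ^ v  (wedge / signed parallelogram area)
u, v = 3 + 4j, 1 + 2j
p = u.conjugate() * v
# p.real == 11.0  ( = 3*1 + 4*2 )
# p.imag == 2.0   ( = 3*2 - 4*1 )
```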
But there are a wide variety of other powerful (and geometrically interpretable) algebraic identities which can be applied to vectors, blades, and multivectors, ranging from awkward to impossible to express using the language of differential forms, Gibbs-style vectors, etc. Physicists often end up resorting to tedious coordinate-by-coordinate calculations for stuff that would end up being an easy vector expression in GA. Learning these identities and how to apply them takes years and a lot of practice solving problems using GA.
My own experience for the first few years of knowing that GA existed but not being too fluent with it was that I would work some problem (mostly 2–3 dimensional geometry problems) out in coordinates, spending like 2 pages of scratch paper for the opaque intermediate calculations, with high chance for mistakes, then eventually find that most of the ugly bits along the way canceled and yielded a nice result. Then I would think a bit more about the problem, skim through a list of GA identities, and find I could have shortened that 2 pages of work to 3 lines, each of which had an obvious geometric interpretation.
I have seen examples proposed before and been distinctly unimpressed. Any serious simplifications in solutions are usually due to some notation-agnostic insight.
* * *
As a relatively recent personal example I spent a few months (in bits and pieces) working out a bunch of metrical spherical geometry for myself without reference to past work, with points represented as displacement vectors to stereographically projected points at https://observablehq.com/@jrus/planisphere with the eventual goal of implementing computational geometry / cartography code using that as a canonical representation, which I think is superior to representations used currently in practical software.
The same spherical relationships are comparable (some things slightly easier, some slightly trickier) to represent as displacement vectors on an embedded sphere. But there again the relationships are clearer to express in GA terms.
Most of the material there is stated without proof (maybe eventually full proofs should be included), but several of the identities there I worked out very tediously with pages of scratch work in coordinates, then realized afterward the same results could be arrived at with only a few lines of GA.
Only a bit of the material is truly novel (after doing the work myself, I hunted around for sources and found some of the same formulas worked out previously using classical spherical trigonometry 200+ years ago), and e.g. some very similar material where the stereographically projected points are represented as complex numbers can be found at http://fer3.com/arc/img/110279.applications%20of%20complex%2...
In theory most of the rest could be also worked out using complex numbers or matrices, but (a) some ideas end up awkward and unidiomatic there so you would never think to do it, so that many identities that are slightly obscure in GA are almost unheard of written in other formalisms, (b) the algebraic manipulation is at least 2–3x more cumbersome.
1) Look at this slick solution using geometric algebra.
2) Look at how ugly the trigonometric solution is. GA is so great!
3) And, by the way, one can mechanically translate the GA solution into the usual vector notation.
Point 3 is even a bit understated: GA concepts are really only used for a few lines under "Solving for the Earth's radius." Once you hit the equation for epsilon^2, it's just standard algebra and trig.
Anyway, the relevant comparison is not between trig and the GA solution; it's between GA and the usual vector language. It's really the same method in different notation. You take the cross product and separate into parallel and perpendicular components, and then you reach the epsilon^2 equation, and it's the same from there.
Also, I think the author is far too hard on the trigonometric solution. The vector solution is somewhat clever, and for any given problem that gets placed in front of me, it's not obvious a slick solution exists. On the other hand, the philosophy of trigonometry is that given a completely determined problem about triangles, you can just trig-bash mechanically to get an answer (and here you can even tell before starting that small-angle approximations will make life easier, so trig is even more attractive). It's really not that bad here. Especially, the comment about it being tricky because one must find a "non-trivial relationship between the four angles" is puzzling. Anyone who's spent time with geometry problems like this knows that the first step is to angle-chase and write in all the values and relations, from which this falls out immediately (and again, totally mechanically). Then you just turn the algebra crank and win.
[I don't have time to think about the spherical geometry stuff. Sorry!]
Bivectors and the cross product aren't just different notation for the same thing, if that's what you meant. They're distinct (but very much related) mathematical structures. For one thing, the wedge product is associative while the cross product breaks associativity.
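The associativity failure is easy to check numerically (a three-line sketch):

```python
def cross(a, b):
    # standard 3D cross product
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a, b, c = (1, 0, 0), (1, 1, 0), (0, 0, 1)
cross(cross(a, b), c)   # -> (0, 0, 0)
cross(a, cross(b, c))   # -> (0, 0, -1): the two groupings disagree
```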
As far as GA sharing a lot with the more common vector algebra/calc methods: personally, I'm happy that GA has an attitude of "if it's not broke, don't fix it". It also means there's really not a lot of time lost in the transition, due to the compatibility. Hell, it's even backwards compatible in the sense that you can still easily retrieve the axial vectors the cross product gave you, if you so wish (which instantly cleared up what the exterior algebra folks were doing with their Hodge star business when I decided I wanted to explore that perspective later on).
This is the big reason I like it. I remember learning bits and pieces of linear algebra whose rules seemed so entirely random (vectors vs pseudo-vectors, curl, quaternions, some spin calculations I can't even remember anymore, etc), that turned out to have more unified geometric interpretations once I learned GA. It definitely slowed me down as a student who hated 'just memorize it' pedagogy.
Generally teachers don’t (can’t) individually decide this. Decisions about what to teach have incredible historical inertia, and are largely decided based on what the teacher learned when they themself went to school decades ago, what everyone else is teaching, what materials are easily available, what notations are used in past literature, etc. Substantial transitions in the teaching of existing material take generations.
In 2020 our basic math/science curriculum and pedagogy in high schools and universities has all been pretty well statically fixed for 50+ years (many parts are unchanged in 200+ years), except in computer science where some of the basic ideas are newer than that, and in graduate-level courses that get closer to the cutting edge.
The place where geometric algebra has seen most rapid adoption is in computer programming, where code actually has to work, and a more effective formalism makes correct code easier to write and reason about, saving a ton of time and effort even for basic examples.
Even in physics, where a better formalism leads to improved physical intuition and deeper conceptual understanding, a transition is an uphill struggle, because symbolic fluency with geometric algebra takes years of practice.
(Some) pure mathematicians on their high horses scoff at anything that doesn’t advance their own obscure abstract research, which is unconcerned with conceptual obstacles faced by undergraduate students, scientists, or engineers. They can hand-wave a better formalism away with “this is isomorphic to X and Y other structures, so there’s no value in it”.
> In 2020 our basic math/science curriculum and pedagogy in high schools and universities has all been pretty well statically fixed for 50+ years (many parts are unchanged in 200+ years), except in computer science where some of the basic ideas are newer than that, and in graduate-level courses that get closer to the cutting edge.
This is wildly incorrect. Even in the past ~20 years we've seen a sea change in our understanding of science pedagogy. Look up the work of Carl Wieman on active learning, or https://www.pnas.org/content/111/23/8410. Inclusive classroom practices are another thing that's come into fashion in the last ~10 years. The curriculum has also evolved; the most obvious thing to point to is the new emphasis on connections to data science in math/stats courses.
If you're someone who doesn't stay up-to-date on pedagogy, then yes, it takes your retirement to bring about a change. But a lot of people, especially those teaching at small liberal arts colleges, have continually evolving teaching practices. There are entire conferences where people get together to talk about college teaching.
> Even in physics, where a better formalism leads to improved physical intuition and deeper conceptual understanding, a transition is an uphill struggle, because symbolic fluency with geometric algebra takes years of practice.
Do you really believe this? To anyone who recognizes that it's the standard stuff in (a clunkier) disguise, it shouldn't take years.
Appealing to these two frictions does not offer a convincing theory of why GA has not been adopted despite being around for, what, 50+ years? The fact that it's worse than existing notation does.
I'd suggest that your comment that math and science pedagogy have been static for the last 50+ years reveals that you are the one speculating beyond their experience.
One of my hobbies is skimming old math textbooks; Lacroix’s textbook from about 1800 is not essentially different in structure or content than a typical 2020 intro calculus textbook for undergraduates or high school students, or almost any book in between. Way less radical or era-appropriate than something like http://www.math.smith.edu/~callahan/intromine.html
If you hunt you can find teachers trying new ideas (and you could also find teachers trying non-mainstream pedagogy 20, 40, or 60 years ago), but it takes generations for ideas to turn over.
If you are interested in better introductory physics pedagogy in particular, David Hestenes (in other work, the chief promoter of GA for decades) is a real pioneer and a huge influence on e.g. Eric Mazur. http://geocalc.clas.asu.edu/html/Modeling.html https://mazur.harvard.edu/files/mazur/files/rep_557.pdf
For context, I checked your profile to see where you did your undergraduate degree. I am familiar with the way calculus is currently taught at that university, and it looks quite similar to the "radical [...] era-appropriate" textbook that you linked (at least based on a quick read of a few chapters). Those courses are also taught in a quasi-active learning style (though nothing as extreme as a flipped classroom, etc.). Your observations may have been accurate 15 years ago, but that's thankfully no longer the case. There's also pressure from the department/admin to make these changes in upper-level courses. See e.g. https://people.math.harvard.edu/~community/inclusive-classro... or materials from https://bokcenter.harvard.edu/active-learning.
In poking around I am also glad to see they switched from Griffiths’s to Townsend’s book for intro QM. Much more conceptually clear with less focus on mindless computation. (Disclaimer: I went to high school with Townsend’s daughter.)
I wonder if anything similar can be done for the undergrad electrodynamics course, which was more or less an experiment of “how many gnarly multiple integrals can you grind before burning out?”
The classical field theory course was one of my favorites at the master level. Classical EM is beautiful in the sense that by sprinkling some math magic you can basically calculate everything from a few basic laws. Everything sort of fits together in a coherent, tight package. TBH, the class did have a fearsome reputation for being math-heavy, and many of my classmates struggled (which was weird, because I was never super-strong in math compared to many of them).
Oh right, because it's a natural language for talking about geometry and mechanics, by far the most common and important type of reasoning that the average student will need to do. And geometric algebra is demonstrably superior for that domain. So your comment is pointless.
As to vectors, obviously they have inverses, additive inverses. Since vectors don't have multiplication, there is no multiplicative inverse, but if you define new operations on them, well then that operation can have an inverse but that is not really "the inverse of a vector" anymore.
Heh, this made me chuckle, and this is the reason why I read HN comments. I can't help but picture someone ringing your doorbell early on a Sunday and saying: "Hi! Have you considered inverting vectors?" slams door "Who was it?" "It was just another of those damned geometric algebraists"
It is an absolute necessity for general relativity.
I mean specifically Geometric Algebra.
It sort of seems like a notation, but it has an almost cult-like following, and perhaps it's more than a notation. Is it a theory, a branch of mathematics?
Additionally the GA formalism enables closed form solutions for the exponential map for all bivector algebras. (see 'Graded symmetry Groups' https://www.researchgate.net/publication/353116859_Graded_Sy...).
A vector is pretty much by definition also a matrix, and there is a standard way to multiply matrices. You can define several inverses of a vector that way, though you can't define a unique inverse.
The standard inner product is of course also an exceptionally typical way to multiply vectors, but the concept of an inverse there doesn't make much sense.
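To make the "several inverses, no unique inverse" point concrete, here is a small numpy sketch (my own toy example, not from the thread): the Moore-Penrose pseudoinverse of a vector treated as a 1xN matrix is a right inverse but not a left inverse.

```python
import numpy as np

# Treat v as a 1x3 matrix; its Moore-Penrose pseudoinverse is a 3x1 matrix.
v = np.array([[1.0, 2.0, 2.0]])   # 1xN "row vector" matrix
v_pinv = np.linalg.pinv(v)        # Nx1; mathematically equal to v.T / (v @ v.T)

# v @ v_pinv is the 1x1 identity (up to float error), so v_pinv is a right inverse...
print(v @ v_pinv)                 # [[1.]]
# ...but v_pinv @ v is only a rank-1 projection, not the NxN identity,
# which is one sense in which no unique inverse exists.
print(np.round(v_pinv @ v, 2))
```

Other right inverses exist too (add anything orthogonal to v to v_pinv), which is why the pseudoinverse is singled out as the minimum-norm choice rather than "the" inverse.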
There are vector spaces of functions, with infinite dimension, but there are also vector spaces with a finite number of elements.
So only some vectors can even be written as 1xN matrices, if that is what you're referring to. But even if you write a vector that way, it doesn't mean it IS a matrix or that it automatically "has" multiplication.
In mathematics, an object only has an operation if it's part of the definition, and as such, vectors don't "have" multiplication.
The set of elements defined by certain properties of their addition and of their multiplication with the elements belonging to a set of scalars is named "vector space" by some and "linear space" by others.
According to the etymology of the word vector, "linear space" would be more appropriate. You have used "vector" with the meaning "element of a linear space", and what you have said is correct, except that for any "vector" as an element of a linear space, considered as a column vector, there exists a corresponding row vector, even in the infinite-dimensional case.
"Vector" means translation of the space, and this is what "vector" meant when the word was introduced by Hamilton. While the set of translations is a linear space a.k.a. a vector space in the generalized sense, the set of translations, i.e. vectors in the strict sense, has additional properties due to the multiplication operations that must be defined for "vectors" in their strict sense (which are needed e.g. to determine the angles between translations and the distances).
"Vectors" as elements of linear spaces are a very general notion, which appears in many domains, and for all linear spaces, including for those infinite-dimensional, you can define matrices, i.e. linear functions, and matrix multiplication, i.e. composition of linear functions, and also the correspondence between a 1xN vector and a Nx1 vector, more correctly between a vector and an associated linear form. The latter also exists for the infinite-dimensional case, even if it is less likely to use names like row vectors and column vectors (though the names bra vectors and ket vectors are still in use for the infinite-dimensional case).
For the infinite-dimensional case the vectors and the matrices become functions of 1 or of 2 parameters and the sums from the formulas of matrix multiplication become integrals.
While for most computer applications, "vectors" refer just to elements of linear spaces, most "vectors" used in models of physical systems are vectors in the original sense of the word, where not only the vector addition and the product with scalars matter, but the products of vectors also have an essential role and their meaning can be best understood in the context of the complete geometric algebra theory.
A standard way to multiply a MxN with a NxK matrix, but none for a 1xN with a 1xN or a Nx1 with a Nx1 matrix - the two possible ways to describe a vector.
You have to transpose exactly one of the two vectors.
And then you have two possible results, 1xN multiplied with Nx1 yields a scalar (that's actually the 'usual' dot-product/scalar product/whatever you call it) and Nx1 multiplied with 1xN, where the result is a NxN matrix.
Though thinking about it more, it seems like the outer-product-inverse of a vector (a) must be unique if it exists; and (b) is highly unlikely to exist.
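In numpy terms (a toy sketch of my own, not anything from the thread), the two products and the rank obstruction look like this:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
# 1xN times Nx1: the usual dot product, a 1x1 scalar.
inner = u[None, :] @ u[:, None]   # [[14.]]
# Nx1 times 1xN: the outer product, an NxN rank-1 matrix.
outer = u[:, None] @ u[None, :]   # shape (3, 3)

# A rank-1 NxN matrix is singular for N > 1, so an "outer-product inverse"
# of a vector can never exist in dimension > 1: det(outer) is 0.
print(inner)
print(np.linalg.matrix_rank(outer))   # 1
```

This suggests a slight sharpening of (b) above: the outer-product inverse doesn't just "highly likely" fail to exist; it cannot exist for N > 1, because the outer product of any two vectors has rank at most 1.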
> 1xN multiplied with Nx1 yields a scalar (that's actually the 'usual' dot-product/scalar product/whatever you call it)
I'm aware of this, but there are two ways we might conceive of an "inverse":
- Since a vector is a matrix, the inverse of a vector might be defined by matrix multiplication, where A is the inverse of B if AB is "the" identity matrix. This is only strictly defined for square matrices, but the pseudoinverse concept extends it to nonsquare matrices.
- Or, we could go for a more basic sense of "multiplicative inverse", where the concept is that if AB = C, then B = A⁻¹C. This is what I was thinking of when saying that the concept of an inverse doesn't make sense when multiplication is the inner product - if I give you a vector v, and its inner product with some other vector u, there is no way of recovering what u was.
Not all vector spaces are equipped with an inner product. The point is that you can start with some simple axioms and build these more complicated things (inner product spaces, algebras over a field, geometric algebras, etc.).
Any vector space over a field (usually part of the definition of a vector space) is equipped with the standard inner product, because multiplication and addition are part of the definition of a field.
What you probably mean is that “you can always define an inner product” but that’s a very different statement.
That’s not true either though, the different dimensions in a vector space do not have to belong to the same field, so you can’t assume you can add them together.
Well, no, not at all.
The inner product is still there. It's still an inner product. The space in which your vectors exist is still an inner product space. You may not care about the inner product, but it doesn't cease to exist when you stop looking at it.
Setting aside the subtler point of what it means to "have" something in mathematics:
clearly only some vector spaces even have the potential to introduce an inner product. Consider F for some random finite field. You can make a vector space from it, but what would the inner product be? Or R x F for that matter, you could never give that an inner product.
That's why the concepts of "vector space" and "inner product space" are separate concepts. Some vector spaces aren't, and could never be, inner product spaces.
Basically the extra stuff that is usually skipped in first year linear algebra courses are the symmetric and asymmetric (often called exterior) products. These form algebras, of course. The exterior product, or wedge product, has a natural interpretation in terms of signed areas (or volumes) and from this you get the determinant as a volume form.
These are the natural generalizations of dot products (inner products) and wedge products (exterior products).
You can take a vector and associate it with a 1 form (asymmetric algebra or exterior algebra), and then multiply two vectors to get a 2 form using the standard wedge product, etc. In dimension 3, the space of 2 forms is dual to the space of 1 forms and so you can "multiply" two vectors to get a third vector. That is all that's going on here.
Actually a good multi-variable calculus class will cover most of this stuff as you need some motivation for Jacobian volume forms used to calculate areas and volumes under change of basis, and dot/wedge products are useful for generalizations of the Gauss divergence theorem and the generalized fundamental theorem that says the integral over a function, f, on the n-1 dimensional boundary of a shape is the differential of the integral of the shape.
Moreover any class on Riemannian geometry will give you all the linear algebra you need as well.
One thing I would caution students with is that by using somewhat non-standard jargon they may not understand how to generalize this stuff to n dimensions, nor will the connections between, say, determinants and wedge forms be clear, or dot products and angles be fully understood if only the n=3 case is emphasized. Only in n=3 can you multiply two vectors to get a vector. But fun fact: in dimension 3k you can multiply two k-forms to get a third k-form (as the space of 2k-forms is dual to the space of k-forms in n=3k). If you think there is this new thing called "geometric algebra" other than usual tensor products, it may not be obvious how things generalize to n != 3.
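A small numpy sketch of the n=3 picture described above (the wedge-coefficient formula here is written out by hand; it is my own illustration, not a library API):

```python
import numpy as np

u = np.array([2.0, 0.0, 0.0])
v = np.array([0.0, 3.0, 0.0])

# Coefficients of the 2-form u ∧ v on the basis (e1∧e2, e1∧e3, e2∧e3):
wedge = np.array([u[0]*v[1] - u[1]*v[0],   # e1∧e2 coefficient
                  u[0]*v[2] - u[2]*v[0],   # e1∧e3 coefficient
                  u[1]*v[2] - u[2]*v[1]])  # e2∧e3 coefficient
print(wedge)   # [6. 0. 0.]: the signed area 2*3 lives in the xy-plane

# In n=3 the wedge of three vectors is the determinant, i.e. the signed volume:
w = np.array([0.0, 0.0, 4.0])
print(np.linalg.det(np.column_stack([u, v, w])))   # 24.0
```

Note the e2∧e3 component here matches the x-component of the familiar cross product, which is exactly the duality between 2-forms and 1-forms mentioned above.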
Yes, that's my sense too. Of course cross products, wedge products etc make sense and that's just standard mathematics, but the part that I haven't really seen the point of is to form the algebra where all these forms live side by side.
It doesn't seem like a useful "fusing", in the way that say the complex plane is.
Of course it's very cool that sub-algebras in 2 or 3-space in GA are isomorphic to the complex plane or even quaternions, but it still feels a bit made up.
For a concrete example, one YouTuber showed how Maxwell's equations simplified to a single equation if you introduce an operator that is a combination of div and curl, and also a new kind of physical entity that combines the electric and magnetic fields.
This is of course cool, but what I want to know is if this new operator makes some physical sense, and if the new multi dimensional field has any physical meaning. If they don't, it just seems like a parlour trick.
Not saying they actually don't, but I haven't seen any deeper explanations of it.
It has the dual quaternions as even subalgebra (in 3D), and unifies all linear and angular aspects. It leads to remarkable new insights, as removing the need for force-couples (pure angular acceleration is caused by pushing along a line at infinity), while pure linear acceleration is caused by forces along lines through the center of mass.
These geometric ideas are independent of dimension - forces, both angular and linear, are always lines. The treatment of inertia becomes a duality map, and things like Steiner's theorem are not needed at all.
On top of this, the separation of the metric that sets GA apart means that this formulation of rigid body dynamics works not only in flat Euclidean space, but unmodified in the Spherical and Hyperbolic geometries. (by a simple change of metric of the projective dimension).
For a (graphics/game programmer oriented) tutorial on this see https://www.youtube.com/watch?v=pq9YfdPHhIo&ab_channel=Bivec...
I remember for a long time, people coming from the math end of things would look down a bit on physicists laboriously working everything out in complex tensor notation when there are these elegant canonical descriptions arising from differential geometry that look very simple and beautiful and are completely coordinate-invariant.
But then when you want to actually calculate something, you end up doing all the painful tensor contractions anyway, so the physicists would likewise often look down on the mathematicians for writing these simple one-liners that described all of mechanics but not really understanding how to calculate stuff.
So if repackaging some of the basic facts of differential geometry as "Geometric Algebra" gets physicists to be excited about it, then that's a good thing. Just like repackaging some of the laborious tensor calculus computations into differential geometry has gotten a lot of mathematicians excited about physics. It really is much more pleasant to work in a coordinate-free manner using differential structures associated to the natural manifold suggested by the problem, rather than being stuck in euclidean space and needing to deal with lots of fictional forces and complex change of basis formulas.
For example, look at this text:
Back when I was in university, we covered this in our differential geometry class. And yes, you'd use more abstract concepts like curvature, hodge dual, and exterior product.
Maxwell's equations in any dimension can be reduced to:
dF = 0 and d*F = 0
That's two equations, not one, but you can introduce a new D = (d, d*) and then get DF=0 if you want.
The advantage here is the d, and F have all the old physical meanings. F is curvature, which is the electro-magnetic field E+B, and d is the derivative (exterior derivative, but that is the derivative needed in calculus).
Here is a derivation:
(This assumes you believe a "Field" has a physical meaning.)
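For readers who want the coordinate version, here is a sketch of how F packages E and B (assuming a Minkowski signature and units with c = 1; sign conventions vary between texts):

```latex
% The electromagnetic 2-form, one common convention:
F = E_x\,dx\wedge dt + E_y\,dy\wedge dt + E_z\,dz\wedge dt
  + B_x\,dy\wedge dz + B_y\,dz\wedge dx + B_z\,dx\wedge dy
% dF = 0   recovers the homogeneous pair:
%   \nabla\cdot B = 0,\qquad \nabla\times E = -\partial_t B
% d{*}F = 0 recovers the source-free pair:
%   \nabla\cdot E = 0,\qquad \nabla\times B = \partial_t E
```

With sources, the second equation becomes d*F = *J, which is why the source-free form above is the one that splits so cleanly.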
Sure, the same information is contained in the F tensor, but it doesn't have a similar straightforward geometric intuition.
Looking at the link reminds me that this entity is used outside of GA, but it still feels a bit weird. More weird than say complex currents, but maybe it actually isn’t.
It's a shame, because the whole theory is a very useful (eg for engineering / applied math) superset of linear algebra.
I really wish I had learned this first in my undergrad years, would have made a whole bunch of things way clearer from the get go:
Its initial discovery was not too late, but both Clifford and Maxwell died young in 1879, and after their untimely deaths there was no one left who could finalize the applications of this theory to physics.
In their absence, the geometric algebra theory was ignored and both the theory of vectors and the theory of electromagnetic field were simplified to forms which are good enough for restricted contexts, but which are nonetheless inconsistent and fail in more general cases (the so-called Maxwell differential equations are valid only in much more restricted conditions than the original integral equations of Maxwell).
As a child I also learned the theory of vectors in the incorrect way, e.g. including the so-called "vectorial product", so I lost time later before understanding that its result is not a vector, and then more time before understanding that the so-called pseudovectors a.k.a. axial vectors and the so-called pseudoscalars are not independent entities that come from nowhere: their existence is just a natural consequence of the properties of vectors.
It would have been much more efficient if the complete theory of geometric algebra had been taught from the beginning.
It is really a shame that the article does not clarify that, btw, what we've just derived is a re-derivation of a thing that has already been expressed and named, by Clifford, and well-characterized: https://en.wikipedia.org/wiki/Geometric_algebra
Such a bummer to see very slick but very ahistorical articles.
As an educator, though, when I see presentations of existing ideas that present them as if they were new, I die a little. You're standing on the shoulders of giants whether or not you think so, and whether or not you say so. It's best to figure out who the giants are, and how you're standing on them. When you are up front about the connections to past scholars, you are giving the credit to those scholars that they deserve, and you are strengthening the storyline, and you are setting a good example for the people that look up to you.
You can add a few sentences at the bottom saying, if you've gotten this far, congrats, you understand some basics of GA, and then link to other resources about the history and current applications of it.
> Instead, I will use: ... Because it forces us to remember what those coefficients are attached to. Knowing that a bivector contains five degrees of freedom, can you figure out what the other two describe?
I'm confused here and don't understand why they keep saying a bivector has five degrees of freedom. If you can uniquely identify one with three scalar coefficients, doesn't it only have three degrees of freedom?
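One way to see the redundancy numerically (a toy helper of my own, not from the article): the two spanning vectors carry 6 numbers, but shearing one vector along the other changes those 6 numbers without changing the bivector's 3 components.

```python
import numpy as np

def wedge(u, v):
    """Coefficients of u∧v on the basis (e1∧e2, e1∧e3, e2∧e3)."""
    return np.array([u[0]*v[1] - u[1]*v[0],
                     u[0]*v[2] - u[2]*v[0],
                     u[1]*v[2] - u[2]*v[1]])

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])
# Shearing u by a multiple of v changes the 6 spanning numbers,
# but u∧v is unchanged: the pair (u, v) over-determines the bivector.
print(wedge(u, v))             # [2. 0. 0.]
print(wedge(u + 0.5 * v, v))   # [2. 0. 0.]
```

So in 3D the count does come out to 3 independent components, which is what makes the article's "five degrees of freedom" claim confusing.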
First, they show some algebra formulas and mention dot product and cross product. But then they start introducing a definition of a vector! With images!
Why, oh why do you need to waste yours and reader's time to introduce basic definitions, if any reader of the article definitely knows that? If they haven't, they wouldn't be able to read the first paragraph at all.
PS: The Russian style of explanation is more like: "Here's the essence of my idea, maybe with some leading pre-definitions, but definitely without the basics. If you're here, you're probably as curious as I am and have already heard of all the basics."
The result is more material in total, because it's easier to write and to read when the author doesn't need to explain 101s to PhDs.
Material such as this serves to remind the reader of what they already know, and contextualize it in a way that is relevant to this article. The article begins by telling the reader, "We all know what scalars and vectors are---here they are---but have you wondered what if ...", and taking the reader beyond. The introduction which you seem to find objectionable is only a small part of a much longer article.
In addition, different readers are used to mildly different notation styles. These introductory blobs inform the reader of the language in the article, and are essentially a friendly statement of definitions.
A third purpose is rhetorical: readers sometimes get stuck while reading text, and these parts of the article work as anchor points where they can loop back and "synchronize" with the writer.
I read a Synopsys simulator manual that explained what double clicking was.
I include 3 sentences defining a scalar so that I can introduce the concept of grade.
I include a few sentences defining a vector because just read the comments here and you'll see there are many definitions of vector and I want to specifically call out the one I care about in this post. I am also using a nonstandard, color-based notation throughout the article so it is helpful to take a concept that people already know just to demonstrate my notation. This also lets me introduce the 3D interactive illustrations.
Did you read the rest of the article or were these two definitions so objectionable that you quit?
The situation that leads to weak typing in computer algorithms -- when you get data from a file or another process and don't know in advance what type it's going to be -- is basically non-existent in blackboard mathematics. Rigorous mathematical papers always tell you what set a variable belongs to when it is introduced, as well as the domain and co-domain of any functions that are defined. This is the blackboard equivalent of strong typing.
Take for example the law of total expectation, usually written as E( E(X|Y) ) = E(X). It's totally non-obvious (so much so that it's harmful, IMO) that the outer E is an expectation over Y. Hiding the summation parameter of E does nothing but hurt math learners here.
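A quick numerical check of the law (a toy model of my own, not from the thread): estimate the inner E(X|Y) as a function of Y, then average it over Y.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy model: Y is a fair coin, and X | Y=y is normal with mean y.
y = rng.integers(0, 2, size=200_000)
x = rng.normal(loc=y, scale=1.0)

# Inner E: E(X|Y) is a *function of Y* (estimated per value of Y).
e_x_given_y = np.where(y == 1, x[y == 1].mean(), x[y == 0].mean())
# Outer E: expectation over Y of that function; it matches E(X) ≈ 0.5.
print(e_x_given_y.mean(), x.mean())
```

The point of the exercise: `e_x_given_y` really is a random variable determined by Y alone, which is the fact the compact E( E(X|Y) ) notation hides.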
Many of those abuses are for a very good cause. Leibniz notation is very powerful, for example, but it's hard to master and physicists really go nuts with it.
For E[X|Y], all you have to remember is that conditional expectation yields a function f(Y) of the thing you conditioned on, and f(Y) is itself a random variable that you can take the expectation of. This property is canonical and baked into the formal definitions. It's not an abuse.
However, I do fault some machine learning types a bit for abusing probability and statistical notation. For example, the Elements of Statistical Learning book extensively overloads E, P(), and other symbols and operators in ways that it doesn't even bother to define. They randomly throw subscripts and decorations onto all sorts of symbols and don't even bother to tell you if those decorations mean they're marginalizing, conditioning, or something else. The book has no glossary of symbols and operators and no preliminary chapter setting out notation, which is unusual for such an enormous book full of hundreds of equations. It would be impossible because the book is a hodge-podge of symbols that change from paragraph to paragraph.
(Statistics is as much maths as theoretical physics is, both are technically mathematics but in practice the field is handled in a very different manner since they are applied and intended to solve a specific set of real world problems and hence not pure)
Since no machine is ever going to read it, it's all up to individual taste and prejudice.
How would a math linter work? Don't see it, but this is one of the things that makes maths a different world from programming.
Clear and short notation, that is just unambiguous enough, is a very important factor, without it books wouldn't just be much longer, I'm not sure we'd even be able to understand it.
I'd appreciate a post from Matt Ferraro on how this is built. Bonus points for including nice syntax-highlighted code "widget" for a cross between maths/programming.
> It is important to remember that bivectors have a certain redundancy built into them in the sense that [...]. We can write them using 6 numbers or 3 numbers, but they actually convey 5 degrees of freedom.
Three (real) numbers have three degrees of freedom, by definition. (And nothing about complex numbers was mentioned.) Is this a parody I don’t get? I feel like I have wasted ten minutes on nonsense.
That isn't correct either, though: the basis consists of three unit bivectors, so they have at most 3 degrees of freedom.
Thanks for taking the time to write it, and for making the notation easy to write and understand.
This is going to confuse readers. A point on a number line is a 1D vector; in other words it is a unit vector pointing along that number line, multiplied by something which scales its length. It’s the latter dimensionless and directionless quantity that’s the scalar.
The set of translations of the straight line is a vector space a.k.a. a linear space.
So the vectors are the equivalence classes of differences between 2 points on the straight line (i.e. two point-differences spanning the same distance, in the same direction, are equivalent and determine the same vector).
While the vectors are equivalence classes of differences between 2 points, the scalars are equivalence classes of quotients of 2 (collinear) vectors, i.e. a scalar is the ratio between the signed magnitudes of 2 collinear vectors.
If you choose a point on the straight line as the origin, you can choose as the representative of each equivalence class corresponding to a vector the one whose tail is the origin. This gives a bijective mapping between vectors and their tip points.
If now you also choose a vector as the unit vector, whose tip is a second point besides the origin, then you can choose as the representative of each equivalence class corresponding to a scalar the ratio between a vector and the unit vector, which corresponds to a third point, besides the origin and the tip of the unit vector. So you obtain a bijective mapping between scalars and those third points.
Because on a straight line there are bijective mappings between points, vectors and scalars (after choosing 1 origin point and a 2nd point as the extremity of a unit vector), they can be used interchangeably in most contexts, but it would be good to remember that all 3 are in fact different mathematical entities.
Can someone tell me how I would use τ=r∧F on a physics problem for Torque?
The distinction gets a little more useful when you start dealing with covariance under coordinate transformations. There it becomes more meaningful, because the _vector_ given by r x F doesn't transform the way a torque should.
For an obvious example of why this is true: suppose r = x and F = y. Then r × F = z. If you change coordinates by mapping z -> 2z, then you would be doubling the torque that you computed, which is wrong; the torque is unchanged. The bivector x^y is correctly unchanged by z -> 2z.
Currently in physics courses (usually not until more advanced mechanics or relativity) the resolution to this is to wave one's hands and declare that, no, torque is a 'pseudovector'. But it is really much easier to think about if you type it as a bivector in the first place.
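The r = x, F = y example above can be checked directly in numpy (a sketch of my own, using the r × F = z setup from the comment):

```python
import numpy as np

r = np.array([1.0, 0.0, 0.0])   # r = x
F = np.array([0.0, 1.0, 0.0])   # F = y
tau = np.cross(r, F)            # r × F = z

T = np.diag([1.0, 1.0, 2.0])    # the coordinate map z -> 2z

# Naively transforming the torque *vector* doubles it...
print(T @ tau)                  # [0. 0. 2.]
# ...but recomputing from the transformed inputs leaves it unchanged,
# because the bivector x∧y lives in the xy-plane and never sees z.
print(np.cross(T @ r, T @ F))   # [0. 0. 1.]
```

This is exactly the pseudovector mismatch: under a linear map T, a true bivector picks up the transformation of its spanning vectors, not the naive vector transformation of its cross-product stand-in.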
I wonder what would that look like (mathematically), and what surfaces or fields would it create?
on the plus side I found it now...
“In this post we will re-invent a form of math that is far superior to the one you learned in school. The ideas herein are nothing short of revolutionary.”