The Exterior Algebra and Central Notions in Mathematics (2015) [pdf] (ams.org)
73 points by jpelecanos on Nov 18, 2017 | 44 comments



This stuff really isn't that hard once it is described well (which doesn't need to involve any simplification). I can't wait to see what happens to the programming community once these notions are embraced with the love and respect they deserve. If anything is going to make a big dent in the field, on a 5-10 year horizon, it'll be this. What we need to do is not think of it (modern concepts in pure mathematics, especially from algebra) as something separate from programming, but to allow it to mix into and skew the fundamental language that we use to reason about coding.


I'm curious: why and how do you think exterior algebra will have such an impact on programming in general? What programming constructs does EA capture that are currently not well abstracted (apart from graphics/linear algebra)? And can EA not be expressed anyway in modern languages with expressive type systems such as Haskell/Scala, which even have higher-kinded types?


Maybe some admixture of machine learning meets compressive sensing meets logic meets complexity theory, and maybe some new software development tools/paradigms too


I don't think this will make a big dent on account of geometry being fundamentally simpler and easier to reason about than language in general, and thus mathematicians dealing in geometry have a much more workable framework for employing an algebraic approach than do computer scientists.

I think if these ideas were going to be revolutionary, they'd already have been. They're not new, unfamiliar, or esoteric, they are in standard curricula.


Exterior algebra is elegant. It simplifies some things. But it's mainstream mathematics. For example, it's covered in the little Spivak (not to be confused with his elementary Calculus book).

As a pure CS person, I'm more interested in advances that push forward constructive mathematics (which has a huge overlap with CS). That's why Martin-Löf's type theory and its further development into HoTT as a new alternative foundation for mathematics is something I'm following with great interest.


The way people interact with differential forms and tangent bundles and so on is somewhat different (and takes much more conceptual machinery) than the way multivectors and geometric algebra (“Clifford algebra”) would best be presented to novices or non-mathematicians.


Yes but also no. One of the most fascinating applications I've seen is here: http://versor.mat.ucsb.edu (and in subsequent work).

There's already a decent tl;dr of the approach on that page, but the tl;dr of that tl;dr is that conformal geometric algebra seems to provide not only a rich language of geometric operations but--importantly!--operations that seem to compose intuitively and interpolate well ("well" in the same way interpolation between quaternions affords the most natural interpolation between 3D transforms).
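
(For anyone who hasn't seen it: the quaternion interpolation referred to here is slerp, the constant-speed geodesic between two rotations. A minimal hand-rolled sketch in Python, assuming quaternions as length-4 numpy arrays; this is not versor's API:)

    import numpy as np

    def slerp(q0, q1, t):
        """Spherical linear interpolation between unit quaternions."""
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        dot = np.dot(q0, q1)
        if dot < 0.0:                  # take the short arc on the double cover
            q1, dot = -q1, -dot
        theta = np.arccos(np.clip(dot, -1.0, 1.0))
        if theta < 1e-8:               # nearly identical rotations
            return q0
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)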

The work there uses it to develop tools for parametric design...I'd be very curious to see it extended (e.g. to include time dimensions).

What is notable though is that whereas a lot of the "hype" around geometric algebra revolves around the ostensible intuitiveness--"look, we only need Maxwell's equation, singular, in GA"--actually using it effectively seems to require acquisition of a lot of vocabulary and concepts (e.g. to make use of versor you need to know about blades, rotors, etc.)...the learning curve to use the material is actually steep. That wouldn't prevent it being used as the backend of some tool, but the idea that GA is intuitive and "easier" than alternatives doesn't seem to hold up in practice (IMHO)...at least for uses like these.

Then there are the efficiency issues: in general, a term in the geometric algebra of a k-dimensional space has 2^k coefficients (and thus adding two terms is ~2^k operations, and naively multiplying them is ~2^k × 2^k = 2^(2k))...a good implementation will have a lot of difficult tradeoffs to consider.
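
To make the cost concrete, here is a minimal dense geometric product in Python (a sketch, assuming a Euclidean metric where every basis vector squares to +1; encoding basis blades as bitmasks is a standard trick). The nested loop over all 2^k × 2^k coefficient pairs is exactly the blow-up described above:

    # A multivector over k basis vectors: dict {blade bitmask: coefficient}.

    def reorder_sign(a, b):
        """Sign from sorting the basis factors of blades a and b into canonical order."""
        a >>= 1
        swaps = 0
        while a:
            swaps += bin(a & b).count("1")
            a >>= 1
        return -1 if swaps & 1 else 1

    def gp(x, y):
        """Dense geometric product: ~2^k * 2^k coefficient multiplications."""
        out = {}
        for A, ca in x.items():
            for B, cb in y.items():
                C = A ^ B          # shared basis vectors square to +1 and cancel
                out[C] = out.get(C, 0.0) + reorder_sign(A, B) * ca * cb
        return out

    e1, e2 = {0b01: 1.0}, {0b10: 1.0}
    print(gp(e1, e2))   # {3: 1.0}   blade 0b11, i.e. the bivector e1 e2
    print(gp(e2, e1))   # {3: -1.0}  anticommutes
    print(gp(e1, e1))   # {0: 1.0}   a vector squares to a scalar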


It is intuitive and easier in the sense that when you don’t have that vocabulary and bag of concepts – trying to work in some other formalism – all of your arguments end up being much more complicated and cluttered.

As for the number of scalar parameters involved: if anyone wanted to put significant time and money in, as has been done with matrix computation libraries, most of the multiplications by zero coefficients can be skipped, and the rest can be efficiently SIMDized, etc.

If the concern is bandwidth over the wire, there are often ways to compress things. E.g. we can take the stereographic projection of a rotor and then reduce the precision to save a lot of I/O without losing accuracy.
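
A sketch of that compression idea for the 3D case, where a rotor has the same four components as a unit quaternion: stereographically project the four constrained floats down to three unconstrained ones, quantize those, and invert on the receiving end. (The function names are mine, and projecting from the pole w = -1 with sign-normalized rotors is just one of several workable conventions:)

    import numpy as np

    def compress_rotor(q):
        """Unit quaternion (w, x, y, z) -> 3 unconstrained floats."""
        q = np.asarray(q, dtype=float)
        if q[0] < 0.0:                 # q and -q encode the same rotation
            q = -q
        return q[1:] / (1.0 + q[0])    # stereographic projection from w = -1

    def decompress_rotor(p):
        """Inverse projection; lands exactly back on the unit 3-sphere."""
        p = np.asarray(p, dtype=float)
        n2 = np.dot(p, p)
        return np.concatenate(([1.0 - n2], 2.0 * p)) / (1.0 + n2)

    q = np.array([0.5, 0.5, 0.5, 0.5])                        # a 120-degree rotation
    print(decompress_rotor(np.round(compress_rotor(q), 3)))   # ~ q, after quantizing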


For anyone else curious, versor is on github: https://github.com/wolftype/versor


I’ve been looking into this stuff as of late (I bought like 400 bucks worth of books on it). I always kinda liked math but always felt like some parts don’t make any fucking sense (cross product only in 3d? Nice algebra, bro). Also, wtf are complex numbers really about? Geometric algebra resolves all this, and it has insane applications, especially for any space-related reasoning. Spacetime algebra is closely related, and it’s also balls-to-the-wall insanity.

Maxwell's four equations reduce to a single, simple equation. It's beyond me why we aren't going hard on this.


Don't start with geometric algebra. Start with abstract algebra. It is the assembly language, if you will, of geometric algebra (and of a lot of other stuff). It'll also give you a new, from-first-principles understanding of complex numbers. And don't read programmers' explanations of abstract algebra; those tend to be very shallow and only touch on a few definitions. Also, don't just read these books. The real gold comes after you've struggled with proofs and a mountain of exercises.


This ^. If you wonder, 'what complex numbers are all about', then chances are you can't give the universal property of the tensor product, or the definition of a Riemannian manifold, or a complex Lie group, either.

I suggest looking at the literature listings of any undergrad math curriculum to get suggestions on what to start with. There are reasons 'linear algebra' is usually taught before 'Applications of the representation theory of Lie groups in special relativity'.

Or maybe I'm too conservative in my view on math education?


While I agree with the spirit of your post, starting with abstract algebra is probably too formal and abstract. The best way to learn something is by learning the concepts first and progressively formalizing, always with some link in the chain connecting back to tangible experience.

I would possibly throw this out as an interesting way to learn a bit (not really a whole lot, but a fun start) about abstract algebra: https://www.crypto101.io/


I respectfully disagree with most of this (except for the struggling with problems part).

A ring/field-theoretic construction of the complex numbers as the quotient R[X]/(X² + 1), an extension of the field R by a root of the polynomial, is interesting/neat, and can be generalized in various interesting ways for number theory, algebraic geometry, etc., but it is not an effective/intuitive early understanding for someone who wants to use them in geometry, physics, general data processing, etc., because most of its content is purely formal/abstract.
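
(That said, for anyone who wants to poke at the formal construction, it is completely mechanical: multiply representatives in R[X] and reduce modulo X² + 1. A quick sketch with sympy:)

    from sympy import symbols, rem, expand

    X = symbols('X')

    def qmul(p, q):
        """Multiply two classes in R[X]/(X^2 + 1) by reducing mod X^2 + 1."""
        return rem(expand(p * q), X**2 + 1, X)

    print(qmul(X, X))              # -1: the class of X is a square root of -1
    print(qmul(1 + 2*X, 3 + 4*X))  # 10*X - 5, matching (1+2i)(3+4i) = -5+10i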

Thinking of the complex numbers as quotients of vectors in the Euclidean plane (under the geometric product) is a much, much more fruitful and enlightening interpretation, and makes the main theorems of complex analysis make clear geometric sense.
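
Concretely (a minimal sketch, assuming an orthonormal basis e1, e2 of the plane): the geometric product of two plane vectors is a scalar (the dot product) plus a bivector (the wedge), and those scalar-plus-bivector pairs multiply exactly like complex numbers, with the unit bivector e1e2 in the role of i:

    def vec_gp(u, v):
        """Geometric product of two plane vectors: (scalar part, e1^e2 part)."""
        return (u[0]*v[0] + u[1]*v[1], u[0]*v[1] - u[1]*v[0])

    def even_gp(z, w):
        """Product of two scalar+bivector elements; since (e1e2)^2 = -1,
        this is literally complex multiplication."""
        (a, b), (c, d) = z, w
        return (a*c - b*d, a*d + b*c)

    i = vec_gp((1, 0), (0, 1))   # e1 e2, the unit bivector
    print(even_gp(i, i))         # (-1, 0): the bivector squares to -1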

Most importantly, the basics of geometric algebra are very accessible to e.g. high school students. It could be used to completely replace typical courses in trigonometry, partially replace & supplement a Euclidean geometry course, partially replace & supplement instruction in solving systems of equations and basic matrix algebra, etc. A curriculum which was wholesale infused with GA tools and reasoning would give students more powerful tools and more vocabulary earlier, would help them solve harder problems in most of their math/physics courses, and would ultimately unify and simplify a lot of the mathematical modeling tools used throughout any undergraduate technical education.

To learn group theory, ring theory, etc., you want to start with a nice stable of explicit concrete examples, explored deeply (I would recommend Nathan Carter’s book Visual Group Theory as a start for someone with a high school math or undergraduate engineering background). Otherwise the theorems and proofs are just symbol twiddling and pattern-matching to some assignment handed down from above. Ideally students would start in on some examples in primary/secondary school (tessellations, transformation geometry, modular arithmetic, some basic number theory, polynomials treated as vectors, ...), and be pretty well prepared for an abstract treatment by the time they are undergraduates. Unfortunately there’s not much time for this in a typical undergraduate mathematics sequence, so most classes in those subjects are done entirely abstractly from axioms, with sparse examples only examined superficially. Many if not most students leave these courses very confused about the meaning of what they have been doing. GA provides some great examples for an abstract algebra course.

If you care about powerful abstract tools, (both finite and Lie-type) groups can be embedded in a geometric algebra and studied using GA tools in a similar way they can be embedded in linear algebra and studied as matrix groups (“representation theory”). In many cases the geometric algebra representation is both easier to understand and gives more tools to use. Both projective and affine geometry can be done with geometric algebra, and differential geometry can be recast as “geometric calculus” which reveals some structure usually hidden in the typical differential form version. See the papers at http://geocalc.clas.asu.edu/html/GeoAlg.html http://geocalc.clas.asu.edu/html/GeoCalc.html


> Thinking of the complex numbers as quotients of vectors in the Euclidean plane (under the geometric product) is a much, much more fruitful and enlightening interpretation, and makes the main theorems of complex analysis make clear geometric sense.

On point! Like I feel my understanding is grounded in something I can reason about myself as opposed to trying to see the relationships that someone else came up with.

I also feel like most commenters misunderstood what I meant when I said “what complex numbers are about”.

What other math books can you recommend?


What $400 of books did you start with, and what else have you read? And what is your other mathematical background?


This is a really beautiful proof that connects complex numbers and primes to derive a formula for calculating pi. It should only require high-school algebra prereqs.

https://www.youtube.com/watch?v=NaL_Cb42WyY

If you want to truly follow the proof, there's one little bit on the chi function in there which is connected to something called the One Sentence Proof. This looks rather esoteric and takes a bit of pondering to wrap your head around, but again only really requires high-school level algebra.

Doing proofs like this is what math is really about. This is how you come to really understand. And this video also happens to be entertaining in how well-put-together it is.


I mean, I understand what they are; however, this isn't really the thing about them that I'm pumped about.



All versions of the cross product are vector representations of bivectors. The 7d cross product exists, but in general, for a, b in R^7, a × b does not have a unique solution, so it's pretty unsatisfactory as an extension.

The problem is that the cross product uses the normal vector of a plane to represent a 2d plane (the unique plane that a and b lie in). But 2d planes only have a unique normal direction in three dimensions. With more dimensions, you have more normal vectors, and more answers to a × b.
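
In code (numpy, 3D, orthonormal basis assumed): the wedge a ∧ b is the coordinate-free object, and only in 3D do its bivector coefficients line up with a vector, where they are exactly the cross product:

    import numpy as np

    def wedge3(a, b):
        """Coefficients of a ^ b on the bivector basis (e2^e3, e3^e1, e1^e2)."""
        return np.array([a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]])

    a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
    assert np.allclose(wedge3(a, b), np.cross(a, b))  # dual of the bivector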


> Also wtf are complex numbers really about?

There is an intuitive way of thinking about complex numbers; it's just not frequently given sufficient exposition. The fundamental concept to internalize is the square root of a negative number. Calculations involving square roots of negative numbers do come up in real-world work, but if all you have are the real numbers, you cannot solve equations like x^2 = -5. Extending the reals isn't arbitrary: it becomes very handy for problems that naturally run into such square roots. For example, we can use complex numbers to model phase and magnitude in physics and electrical engineering.

Complex numbers resolve this problem by isolating the discrepancy in a single imaginary unit i, where i^2 = -1. If you accept the definition of i, you can usefully model any square root of a negative number: defining i axiomatically lets you express the square root of an arbitrary negative number in terms of real numbers and i (for example, the square root of -5 is √5 · i). In this way, the complex numbers extend the real numbers and encompass them.

If you’re with me so far, this next part might improve your intuition. A complex number is typically written z = a + bi, where a and b are real numbers and i is the imaginary square root of -1. This is presumably what you’re familiar with. For the complex number z, we have a real part, a, and the imaginary part, b. More formally, Re(z) = a and Im(z) = b. Instead of reasoning about complex numbers using this form, you can instead represent them as ordered pairs of the form (a,b), where a and b are real numbers and the same real and imaginary parts of z, respectively. This form doesn’t represent i explicitly, which might be more straightforward to understand. Moreover, complex numbers of the form (a,b) can be modeled as points in a plane, because they are coordinates.

Recall that a plane is a two-dimensional coordinate space; a line is a one-dimensional space, and you can generalize this conceptually to k dimensions, represented by R^k. An ordered pair is a 2-tuple, which generalizes to a k-tuple, such that the points in a k-dimensional space R^k are k-tuples, (x1, ..., xk). Therefore, when we're talking about ordered pairs, we're talking about planes, and each element of the pair is a coordinate along one dimension of the plane. For any given real number a, the real number's ordered pair is (a,0). We set i = (0,1); for any x = (a,b) and y = (c,d), xy = (ac - bd, ad + bc). This defines multiplication on the set of all complex numbers (together with componentwise addition, these operations satisfy the field axioms, establishing the complex field as an extension of the real field).
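
As a sanity check, that one multiplication rule is all you need to implement; the "imaginary" behavior falls out of it with no i in sight:

    def cmul(x, y):
        """(a,b) * (c,d) = (ac - bd, ad + bc) on pairs of reals."""
        (a, b), (c, d) = x, y
        return (a*c - b*d, a*d + b*c)

    i = (0, 1)
    print(cmul(i, i))             # (-1, 0): "i squared is -1"
    print(cmul((3, 4), (3, -4)))  # (25, 0): z times its conjugate is |z|^2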

Because any complex or real number can be represented as an ordered pair of real numbers (a,b), we use R^2 to represent the complex plane, where R traditionally denotes the set of all real numbers. Finally we get to our result: by modeling the already intuitive real numbers and the complex numbers as points in a plane, we can practically model and reason about real-world 2-dimensional problems using geometric and topological methods that would be unavailable without a square root of negative numbers. Naturally, this features heavily in mathematical analysis :)

Now that we've gotten this far, we can look at a basic, practical example. Let's say we want to model a voltage or current. Let z be a complex number representing it on the complex plane R^2, with the vertical axis for Im(z) and the horizontal axis for Re(z). The magnitude is the length of the line segment drawn from the origin to the point z, and the phase is the angle that segment makes with the positive horizontal axis. You could use trigonometry to reason about phase differences in a current, but it is much simpler to represent them with complex arithmetic instead.
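
Here's the payoff in miniature, using Python's built-in complex type (the magnitudes and phases are made up for illustration): adding two same-frequency signals, which would otherwise need trig identities, is just complex addition:

    import cmath

    V1 = cmath.rect(5.0, cmath.pi / 6)    # 5 V at +30 degrees, as a phasor
    V2 = cmath.rect(3.0, -cmath.pi / 4)   # 3 V at -45 degrees

    V = V1 + V2                           # superposition: one complex addition
    print(abs(V), cmath.phase(V))         # magnitude and phase of the sum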

I hope that helps a bit to build your intuition. If you can sit through it and work through the proofs, I have found that Rudin’s Principles of Mathematical Analysis is really excellent for building up the number systems from first principles in the first chapter. It’s dry and might take you 10 minutes per page, but it’s self-contained and will really enhance your understanding of why we have systems of numbers from a unifying theoretical perspective.

For example, you can (hopefully) see how complex numbers are really just a resolution to the problem of square roots of negative numbers, and that they have a practical application for modeling problems in physical space. You can think of irrational numbers in a very similar way: just as the complex numbers extend the real numbers with this resolution, the irrational numbers extend the rational numbers by resolving equations that have no rational solution. Take the square root of 2: there is no rational p satisfying p^2 = 2.

The intuitive theory of irrational numbers is not that they're "infinite decimals" (which leads to the question, "What are infinite decimals and why do we care about them?"). The intuitive theory of irrational numbers is that they are a set of numbers which resolve equations that we cannot resolve using only rational numbers. Similarly, the intuitive theory of complex numbers is not that they're numbers with some arbitrary "imaginary" part embedded in them; they're a system of numbers that express ideas we cannot reason about using only the reals.

Putting all of this together, we have:

1. The set of all natural numbers N: {0, 1, 2, 3, ...}. These are useful for counting.

2. The set of all integers, Z: {..., -2, -1, 0, 1, 2, ...} ("Z" stands for "Zahlen").

3. The set of all rational numbers, Q, where Q stands for quotient.

4. The set of all irrational numbers, sometimes denoted by I.

5. The set of all real numbers, R, which contains both the rational and irrational numbers.

6. The set of all complex numbers, sometimes denoted by C, which contains both the real and imaginary numbers.

Each number system is an extension of the one preceding it, expanding the system so that new operations can be defined and we can resolve problems that arise, but are unsolvable, in the previous number systems.


IMO the “proper” way to learn about / think about complex numbers is as quotients of Euclidean planar vectors. http://geocalc.clas.asu.edu/pdf/GrassmannsVision.pdf

Your version, by contrast, gives much less insight into their structure.

Rudin’s book is full of good problems for analysis students who want to be inducted into the exclusive club of mathematicians, but is not really the best way to learn about the meaningful mathematical modeling potential of number systems, IMO.


Actually I agree with you. That’s a good point. To clarify, the reason I like my approach is not because it’s comprehensive, but because it’s better than the traditional exposition for letting students build initial intuition. In my experience people dislike learning about complex numbers because the notation of z = a + bi feels arbitrary to them, and there’s not enough context in which to motivate them.

To your point, representing them as quotients of vectors is definitely better for overall understanding. But for a quick and dirty approach that still involves some theory for building intuition of why complex numbers exist and are useful, representing them as ordered pairs is better than what’s usually done, which is the introduction of complex numbers in computational courses instead of theoretical ones. Ordered pairs are extremely intuitive because they can be plotted on two axes, and topological reasoning in general is fairly intuitive in lower dimensions.

Were I not confined to a small comment I’d definitely go with your approach, though. I’m just optimizing for solving a student’s “what even is?” initial intimidation, not for really working with them comprehensively.


The problem I have with just an “ordered pair” pedagogy is that it confuses (in students’ minds) the concepts of Euclidean point, displacement vector between points, and “complex number” (which I think of in most contexts as a similarity transformation of Euclidean vectors).

In an arbitrary Euclidean vector space, the coordinates chosen are somewhat arbitrary, except for being typically orthogonal and of the same length (a sometimes arbitrary chosen or sometimes meaningful “unit”). But similarity transformations are in some sense “unitless” (insofar as they don’t involve meters or inches or whatever, because they are quotients of vectors and the units cancel): the value 1 is very meaningful and not arbitrary at all (it represents the identity transformation), the value –1 represents a reflection of vectors through the origin, the value i is a unit bivector (in this case representing a specific quarter-turn rotation), and all values of unit magnitude (e.g. √3/2 + i/2) represent pure rotations.
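
(That last point is easy to check with Python's built-in complex numbers: multiplying by a unit-magnitude value leaves lengths alone and only turns the vector:)

    import cmath

    z = complex(3**0.5 / 2, 0.5)    # sqrt(3)/2 + i/2: magnitude 1, angle 30 degrees
    v = complex(2.0, 0.0)           # a displacement of length 2 along the real axis

    w = z * v
    print(abs(w), cmath.phase(w))   # still length 2, now at pi/6: a pure rotation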

To teach someone who has done some analytic geometry in Cartesian style (which is itself sort of jumbled conceptually, because students are taught that the grid is the plane instead of being an arbitrary coordinate system laid down over the plane), explaining this takes quite a bit of nuance and clarity on the part of the instructor. And even many mathematicians and technical professionals conflate these distinct concepts all the time. For example, they represent points (e.g. points on plane curves), vectors, and transformations all as complex numbers. Or they represent points as “position vectors” without always making it obvious that the point per se also involves adding the displacement to an arbitrarily chosen origin.


Maxwell's equations also reduce to a single equation using differential forms.

I find the "geometric algebra" formulation rather dull, on the contrary.


The neglect of the exterior algebra is the mathematical tragedy of our century.

Why has it been neglected?


If you look at the paper you will see that it uses relatively advanced mathematical language. Did you understand the significance and meaning of exterior and geometric algebra from it? If not, then there you have a partial explanation. It is much easier to just understand vectors and matrices, and maybe tensors, for practical and applied work.


I do not agree with this. The simplest mathematical concepts can look very complicated when presented in a formal way.

Learning the geometric product in high school wouldn't be more difficult than learning the dot and cross products, and it would demystify concepts that are otherwise difficult to grasp, such as complex numbers and even quaternions.

There are historical reasons why we do not learn it this way, and in my opinion that is, indeed, a great tragedy. A tragedy that I hope will be remedied some day.

Disclaimer: I deal every day with 3D rotations. Euler angles have traditionally been used in my field, but they present many problems. Everybody knows we could do better with quaternions, but very few people understand them. I have shown many people how to interpret quaternions via geometric algebra concepts, and I have not yet found anybody who doesn't think they are much more approachable that way.
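
For anyone curious what that GA interpretation looks like in code: a rotor R = cos(θ/2) - sin(θ/2) B, with B the unit bivector of the rotation plane, rotates a vector via the sandwich product R v R~, and componentwise this is exactly unit-quaternion conjugation. A minimal sketch (one sign convention of several):

    import numpy as np

    def qmul(p, q):
        """Hamilton product; 3D rotors multiply with the same table."""
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(R, v):
        """Sandwich product R v R~ (the reverse negates the bivector part)."""
        Rrev = R * np.array([1.0, -1.0, -1.0, -1.0])
        return qmul(qmul(R, np.array([0.0, *v])), Rrev)[1:]

    theta = np.pi / 2                      # quarter turn in the e1^e2 plane
    R = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])
    print(rotate(R, [1.0, 0.0, 0.0]))      # ~ [0, 1, 0]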


I do not disagree with you! For example the book http://faculty.luther.edu/~macdonal/laga/ gives an elementary introduction to the subject.

I learned about geometric algebra just this year, and applied it to compute the graphics in an app I wrote for a customer. It was a real eye-opener: concepts that I struggled with before were really simplified by using geometric algebra.

BUT: I would never have guessed the usefulness of it for me from this paper.


Second recommendation for laga.


The formalism is pretty hard to grasp indeed, but I guess that is true of any mathematical theory. From what I know, exterior/geometric algebra is much simpler and more intuitive than, say, linear algebra.


That's wrong, as Geometric Algebra IS linear algebra. So you first have to understand normal linear algebra (vector spaces, inner product, etc.) and then you can properly understand geometric algebra.


You have it backwards. To understand the inner product you first need to understand the geometric product. ;-)

More seriously though, “linear algebra” is often used to mean “matrix algebra”, which you do not need to understand the basic concepts of geometric algebra. Coordinate-free concepts in linear algebra are geometric algebra concepts, and can be easily taught in a first course on the subject.

What you do need to do is first learn about Euclidean vectors as displacements of Euclidean points (and have some basic grounding in Euclidean geometry of points and lines and circles), after which you can learn about the geometric product of vectors, and the various kinds of multivectors and derived products (e.g. the inner product) which are produced out of that product.

Students can wait until after they have studied the basic concepts to learn more generically about quadratic forms, arbitrary linear transformations (which can be extended to multivector transformations via the “outermorphism”), and so on. And might never need to get into mathematicians’ more abstract/formal concepts of rings and modules and Lie groups and so on, though if they do want to they’ll have some better examples and better intuition about it.


Hmm, that is interesting. Let me cite the book Geometric Algebra for Computer Science:

> To understand the structure of the book, you need a better feeling for what geometric algebra is, and how it relates to more classical techniques such as linear algebra.

I followed the first chapters of this book and it does not require any knowledge of linear algebra.


I read that book as well; it is great for motivation but mathematically a little sloppy (otherwise you wouldn't come away from it thinking you don't need linear algebra...). It doesn't say so, but it first introduces the linear algebra you need to know: vector spaces, inner products, etc., though without rigorous proofs of the basics.


I'm not sure I would say LA must be learned first; certainly you need a study of vector spaces and inner product spaces, but not necessarily a complete course in LA. But you will understand each better by understanding the other. If you understand neither, this book covers them together. http://faculty.luther.edu/~macdonal/laga/


I think there is a lot of misunderstanding out there about what LA is. For example, LA certainly does not have to include the cross product. Just introducing the basics of LA, like vector spaces, linear mappings, bases, eigenvalues, singular value decomposition, etc., can easily take a semester to grasp properly (if you haven't had any exposure to that sort of thing before).


Might seem funny, but multilinear algebra (i.e. geometric algebra) is more intuitive to me than linear algebra (i.e. vector algebra/calculus).


I don't know what the quotation's author meant. Exterior algebra was definitely a well-understood and frequently used tool in the 20th century.


You are most likely right, but it is still kind of a niche. If you take a look at the curriculum of a Computer Science program, it is pretty much absent. From what I know, exterior/geometric algebra is superior in many ways to linear algebra: first, it is backwards compatible, and second, (apparently) it is much more intuitive. But linear algebra is still the standard.


Exterior algebra is not particularly useful in Computer Science, and that's probably why it's absent. There are other things competing for a place in the curriculum, and they probably deserve it more.

Exterior algebra is no more superior to linear algebra than multiplication is superior to addition. Both are important, there are important connections between the two, and you definitely need to understand addition before you understand multiplication.


For a gentle appetiser, see Spivak's Calculus on Manifolds. It introduces exterior algebra as a means to an end in doing computations on (embedded) manifolds. At first the definitions seem opaque and the formalism clunky; at the end, Spivak recovers Stokes' theorem as a computation. This was a breath of fresh air for me after I took a warped version of Calc III.


Doesn't the example show that

    (ii): e1∧e2 = -e2∧e1
and

    (iii): (e1∧e2)∧e3 = e1∧(e2∧e3)
contradict each other? In the end we end up with

    (e1∧e3)∧e5 = e5∧(e1∧e3)
which seems to go against (ii)


No: (ii) only holds when e1, e2 are vectors from the original vector space, not arbitrary products. Moving the vector e5 past the 2-blade e1∧e3 takes two adjacent transpositions, so the sign is (-1)² = +1, and there is no contradiction.



