
A Geometric Algebra Implementation Using Binary Tree - grondilu
https://hal-upec-upem.archives-ouvertes.fr/hal-01510078/document
======
macawfish
Yes! I've been looking for something like this for months now! A few weeks
back, as this paper was being published, I remember a night when I was
walking down the street wondering if such a thing was possible: some kind
of tree structure for doing efficient Clifford algebra products in higher
dimensions. It must have been in the air :)

Consider that the Clifford algebra of a 15-dimensional vector space has a
basis with 2^15 = 32768 elements. Being able to do efficient calculations
in arbitrary subspaces of such a huge algebra is amazing.

That said, higher-dimensional Clifford algebras are related to
lower-dimensional ones by very well-defined rules:
[https://en.wikipedia.org/wiki/Classification_of_Clifford_alg...](https://en.wikipedia.org/wiki/Classification_of_Clifford_algebras)

------
vmarsy
Strange to see an academic paper without any context making it onto the front
page of HN. I didn't know that many people would be interested in geometric
algebra.

The abstract was promising:

> This paper presents an efficient implementation of geometric algebra, based
> on a recursive representation of the algebra elements using binary trees.
> The proposed approach consists in restructuring a state of the art recursive
> algorithm to handle parallel optimizations.

Unfortunately I found the results and conclusion disappointing.

The method sounds interesting for the problem. Unfortunately, there are no
figures or tables presenting the results at all, and no mention of parallel
optimizations (unless the SIMD optimization is what the authors refer to as
parallel optimizations; I was expecting multi-core optimizations, or at least
shared-memory multi-threading or GPU computing).

They focus heavily on theory, but only ~2 pages out of 21 are about results.
I'm used to reading papers whose experimental results describe the
experimental setup (what hardware, etc.).

They only give small hints about expected results such as:

> The results show here an overall 25% relative gain over Gaigen mainly due to
> the SIMD implementation

But it's hard to wrap your head around this. Is it SIMD on a 4-core machine?
8 cores? 2 cores? Knowing the hardware setup would change my perception of
the results. Especially since the prior art mentions "Gaalop", a CUDA/OpenCL
implementation. Why isn't that what they compare their results with?

The authors describe Gaalop as "_the most recent efficient implementation_ of
the geometric algebra", yet seem to contradict themselves in the Conclusion:

> We show that the proposed method is at least as fast as _the main state of
> the art specialized geometric algebra implementations_.

Is Gaigen or Gaalop the state of the art? I found this confusing.

~~~
Jach
There are lots of mathematical people on HN, so it's not too surprising. It
also wouldn't surprise me if there were more people like me: I have a
romanticized idea of geometric algebra as a better way of understanding things
I know and things I'll someday learn, and I'll upvote/bookmark/save-for-later
a lot of what I come across about it, on the chance I get the combo of
time+motivation to overhaul my linear algebra knowledge and go really deep
into geometric algebra, on both the analytical side and the computational side.

Thanks for the summary.

------
sangpo_pepys
geometric algebra is one of my favorite-est things. for more info and history,
check this out:

[http://geometry.mrao.cam.ac.uk/](http://geometry.mrao.cam.ac.uk/)

~~~
grondilu
One of the drawbacks is that it's fairly slow computationally. I wrote a Perl
6 module[1] and I'll try to update it with this binary tree method. Hopefully
I'll see an improvement.

1\. [https://github.com/grondilu/clifford/](https://github.com/grondilu/clifford/)
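
The flavor of recursive product involved can be sketched in a few lines (this is my own Python sketch under a simplifying assumption, not the paper's or the module's code; I assume a positive signature, e_i^2 = +1). Writing A = A0 + e_n A1 with A0, A1 in the algebra one dimension down, the product recurses as A B = A0 B0 + Â1 B1 + e_n (Â0 B1 + A1 B0), where the hat is grade involution.

```python
# A multivector of Cl(n) is a number (n = 0) or a pair (a0, a1)
# meaning a0 + e_n * a1, with a0 and a1 multivectors of Cl(n-1).
# All names are mine, chosen for illustration.

def neg(a):
    """Negate a multivector, leaf by leaf."""
    if isinstance(a, (int, float)):
        return -a
    return (neg(a[0]), neg(a[1]))

def involute(a):
    """Grade involution: negate the odd-grade parts.
    For a = a0 + e_n a1 this is involute(a0) - e_n involute(a1)."""
    if isinstance(a, (int, float)):
        return a
    return (involute(a[0]), neg(involute(a[1])))

def add(a, b):
    """Add two multivectors of the same dimension."""
    if isinstance(a, (int, float)):
        return a + b
    return (add(a[0], b[0]), add(a[1], b[1]))

def mul(a, b):
    """Geometric product via A B = A0 B0 + Â1 B1 + e_n (Â0 B1 + A1 B0),
    assuming e_n^2 = +1."""
    if isinstance(a, (int, float)):
        return a * b
    a0, a1 = a
    b0, b1 = b
    return (add(mul(a0, b0), mul(involute(a1), b1)),
            add(mul(involute(a0), b1), mul(a1, b0)))

# In Cl(2): e1 and e2 as depth-2 trees of coefficients.
e1 = ((0, 1), (0, 0))
e2 = ((0, 0), (1, 0))
assert mul(e1, e1) == ((1, 0), (0, 0))      # e1 e1 = 1
assert mul(e1, e2) == neg(mul(e2, e1))      # e1 and e2 anticommute
```

The appeal is that the recursion never enumerates the 2^n blades explicitly; it just splits on one basis vector at a time.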

~~~
sangpo_pepys
why perl? just curious

