The Octonion Math That Could Underpin Physics (quantamagazine.org)
412 points by jonbaer 7 months ago | 188 comments

I think this is the first article I've ever read about the octonions that didn't include the following John Baez quip:

"There are exactly four normed division algebras: the real numbers ($\R$), complex numbers ($\C$), quaternions ($\H$), and octonions ($\O$). The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on. The complex numbers are a slightly flashier but still respectable younger brother: not ordered, but algebraically complete. The quaternions, being noncommutative, are the eccentric cousin who is shunned at important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic: they are /nonassociative/."

Is there an easy explanation of what problems quaternions or octonions solve?

Imaginary numbers are needed to take the square root of a negative number, and complex numbers result from combining these new numbers with the reals. Complex numbers also supply roots for polynomials that have none among the real numbers.

But I have no similar comparison of what I can do with a quaternion or octonion that I can't do with a complex number. I remember seeing some w-based number system derived from the cube roots of either 1 or -1 (I forget which, but w and -w were the solutions that weren't 1 or -1). It did all the same things complex numbers do, though, and was considered mostly uninteresting.

It also seems like there is a pattern that continues infinitely beyond the octonions, with everything past that point behaving much the same. But are the octonions even needed in the way complex numbers are needed, or do they just make some math problems easier to work with?

Historically, quaternions came about as a way to try to reason about three dimensional physics. I mean, complex numbers were obviously really nice -- two dimensional numbers you could meaningfully add, subtract, multiply and divide. But they were only 2-D and we live in a 3-D world and we want to do 3-D physics.

So Hamilton was trying really hard to find a way to have 3-dimensional numbers that behaved nicely, and he couldn't do it. But he did find 4-dimensional numbers -- the quaternions. And they were really neat. You can use them quite well for classical mechanics, and electromagnetism, and even special relativity. So, why don't we?

We look at Maxwell's equations of electromagnetism today, and they're really nice, single-line vector formulas. You can also write them as nice, single-line quaternion formulas. Our notion of vector didn't exist at the time the quaternions were first used, and it was a boon to have quaternion notation to simplify some of these physical laws. Vectors and quaternions competed for a bit, and vectors won since they generalize to arbitrary dimensions.

Hidden inside of quaternion multiplication, you can find the three-dimensional versions of the dot product and cross product. And they do have some theoretically interesting properties for number theory and abstract algebra. In the end, however, sometimes items are discarded in favor of better items. I'd rate quaternions as one of the coolest items that ultimately wound up in the discard pile.

Quaternions show up all the time in 3d graphics - they're how you represent rotation matrices without the problem of gimbal lock.

I don't know if that's significant, but it's how I stumbled on the concept in high school when I was messing with DirectX.

Rotation matrices by themselves do not suffer from gimbal lock. I think you meant Euler angle representations with the rotations always applied in a consistent order around the pitch/yaw/roll axes.

Right. Just to add on, one reason Quaternions are still used in graphics (despite rotation matrices not suffering from gimbal lock either) is that they're easy to interpolate between, even if you have many.

If you just have two you can slerp (or not), but if you have a large number of them (weights from an animation system, for example), a basic weighted sum followed by normalizing is shockingly well behaved and extremely fast.
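For the curious, here's a minimal numpy sketch of that weighted-sum-then-normalize blend. The (w, x, y, z) layout, the function name, and the sign-alignment step are my own choices; the sign alignment matters because q and -q encode the same rotation, so summing without it can cancel terms:

```python
import numpy as np

def blend_quaternions(quats, weights):
    """Blend unit quaternions (rows of `quats`) by a weighted sum
    followed by normalization.

    Because q and -q encode the same rotation, each quaternion is first
    flipped into the hemisphere of the first one so the sum can't cancel.
    """
    quats = np.asarray(quats, dtype=float)
    weights = np.asarray(weights, dtype=float)
    ref = quats[0]
    # sign-align: flip any quaternion on the far hemisphere
    signs = np.where(quats @ ref >= 0.0, 1.0, -1.0)
    blended = (signs * weights) @ quats       # weighted sum
    return blended / np.linalg.norm(blended)  # project back onto the unit sphere
```

With just two quaternions and weights (1 - t, t) this is the "(or not)" option above, sometimes called nlerp: the cheap alternative to slerp.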

This is a timely comment. I've been aware of quaternions for rotation for decades but only learned yesterday that plain rotation matrices also work for some of those use cases. I'd never heard that small fact until then, and now I stumble across your comment!

Geometric algebra, related to quaternions, can represent Maxwell's equations in a single equation, using polyvectors and the inner and outer products.

The subalgebras formed by scalar and bivectors (1, x^y, y^z, z^x)--or the monovectors and the pseudoscalar (x, y, z, x^y^z)--from a three-dimensional geometric algebra have a lot of the same mathematical properties as quaternions.

I don't know whether or not octonions have a similar relationship with a 4-D geometric algebra, where one dimension is a timelike dimension, because that is one gnarly mess of anticommutative, nonassociative math to wade through.

You can summarize Maxwell's equations in any dimension using two formulas: $$\delta F = j, \qquad dF = 0$$ where $F$ is the field-strength two-form and $j$ is the current density.

Complex numbers tell you what happens if there is an i with i^2 = -1. What happens is that you get a cool way to express 2D rotations.

Quaternions tell you what happens if there are three different i's with this property. And it turns out this leads to a nice algebra that expresses 3D rotations very well (and even 4D, if you believe the article).

Octonions tell you what happens if there are seven such i's. And it leads to a cool algebra that helps express... we don't quite know what yet.

Considering other numbers of i's doesn't lead to anything coherent. Neither does choosing different possibilities for what the result of multiplying one i by another different i should be.

So... i^2 = -1 apparently can have either zero, one, three or seven solutions, and they have to have very specific relationships between themselves for calculations to make sense.


I think the main curiosity stems from the fact that octonions are as far as you can go.

In math infinity turns up all the time so having there be exactly a finite number of anything feels weird.

There is an infinite number of them.


They're nuts, though.

You can do sedenions; they lose another property. Beyond that they aren't interesting because no properties really change.

I've always found the five perfect solids kind of interesting for that reason. Seems so weird for the set to be so finite.

There are other "works only up to N, where N is some small positive integer" scenarios.


Quaternions, in three words: "Rotations in 3D". I like this explanation: https://probablydance.com/2017/08/05/intuitive-quaternions/

One thing I'd like to add: they're not actually 4-Dimensional. (read the link for more on this)

>Is there an easy explanation of what problems quaternions solve?

Sure. Unit quaternions form a double-cover of SO(3).

In other words, you can encode a rotation of a 3-dimensional object with a single unit quaternion.

But wait, there's more! You could do the same with a matrix, or a triple of angles. Why not do that?

Answer: interpolation. The "natural" way you want to go from one rotation to another corresponds to exponentiation of quaternions. If you linearly interpolate matrices, the intermediate steps will do something nasty: they won't even be rotations!
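A sketch of that "natural" interpolation (slerp, constant-speed motion along the arc between two unit quaternions); the function name and the (w, x, y, z) tuple convention are my assumptions:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1
    (4-tuples). Follows the shortest arc; falls back to lerp when the
    quaternions are nearly parallel."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:            # q and -q are the same rotation: take the short way
        q1 = tuple(-b for b in q1)
        dot = -dot
    if dot > 0.9995:         # nearly parallel: lerp + normalize is numerically stable
        out = tuple((1 - t) * a + t * b for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)   # angle between the two unit quaternions
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Halfway between the identity and a 90-degree rotation about z
# lands on the 45-degree rotation, as expected.
s = math.sqrt(0.5)
mid = slerp((1.0, 0.0, 0.0, 0.0), (s, 0.0, 0.0, s), 0.5)
```

Every intermediate value stays on the unit sphere, so it is always a genuine rotation, unlike a linear blend of matrices.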

The natural way to implement the Arcball interface for rotations is using quaternions. Here[1], I have implemented it in ProcessingJS and wrote up the math behind it.

Quaternions (like complex numbers) can do other things too, but this alone is a good start. Also gives you intuition why they aren't commutative: because rotations in 3-space aren't.

TL;DR: Unit complex numbers = rotations of plane. Unit quaternions = rotations of 3-space.

PS: you shouldn't think of complex numbers as the solution to the problem of "taking the square root of -1". Think of them as "how can I multiply/divide a 2D vector by another 2D vector?" - there's only one way to do it sanely (multiply/divide lengths, add/subtract angles). This is what the complex numbers are.
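You can see this directly with Python's built-in complex numbers (cmath.rect builds a number from a length and an angle):

```python
import cmath
import math

# Multiplying complex numbers multiplies their lengths and adds their angles.
z1 = cmath.rect(2.0, math.pi / 6)   # length 2, angle 30 degrees
z2 = cmath.rect(3.0, math.pi / 3)   # length 3, angle 60 degrees
z = z1 * z2

print(abs(z))          # ~6.0, i.e. 2 * 3
print(cmath.phase(z))  # ~pi/2, i.e. 30 + 60 degrees
```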

Hamilton was trying to solve the same problem in 3D, and couldn't (turns out, it is not possible[2]), but solved it in 4 dimensions, and later found many applications for them.

[1] http://www.math.tamu.edu/~romwell/arcball_js/index.html

[2] https://en.wikipedia.org/wiki/Frobenius_theorem_(real_divisi...

In my opinion this is the best answer, because it neatly explains why quaternions are more useful than vectors for rotation. Vectors are "nicer" because they generalize to arbitrary dimensions. But quaternions handle 3-dimensional rotations in (essentially) a single step. The point about interpolation is really important, because all the machinery offered by vectors becomes a burden.

It's also good to think about complex numbers in the way described here because it neatly abstracts the concept of numbers and maps them to a form of dimensionality. Great comment all around.

Vectors are still in play IIRC; quaternion rotations are an application of the rotation theorem.


On an unrelated note, thanks for that Feynman integral trick post.

I did not understand quaternions until I read Hamilton's original works. Maybe I just have a 19th-century brain or something, but I found them delightfully free of modern gobbledygook.


>Maybe I just have a 19th-century brain or something, but I found them delightfully free of modern gobbledygook.

No, you can blame Bourbaki for that. People such as V. Arnold decried the way mathematics is now presented[1].

It was from Hamilton's book that I learned what the word vector means and why it's used. It simply means carrier (as in "malaria vector", which you might have heard from biologists) - and it carries the space, by a translation!

Such lucidity is absent from all linear algebra books I've seen.

We need to go back to the presentation style of 19th century, where not only the result, but the thought process is presented. Today's papers look like they are written for formal verification systems.


I share the same sentiment. The state of modern mathematics exposition is well summarised IMO by "they like the logically most efficient path; that rarely coincides with the pedagogically most efficient one", to paraphrase.

I also have an anecdote similar to yours regarding Hamilton and vectors: I think it was in one of the "Analysis Infinitorum" (Euler) that I found the natural logarithm being called the "hyperbolic logarithm" (it was the English translation of course). When I was a kid I was perplexed by how everyone seemed to insist on using e as the base of their logarithms and exponentials -- why the hell? Reading Euler's treatment of the subject would have been very satisfying then.

If you liked Hamilton, you should try Lanczos' Variational Principles of Mechanics.

This resonates with me so much. Thanks for the tip.

If you code a system with 3D rotations represented as Euler angles, two axes can align and you lose a degree of freedom. This is gimbal lock.

Using unit quaternions (aka versors) to encode the rotation instead, you cannot lose a degree of freedom. 4x4 matrices also solve this problem, but quaternions are computationally cheaper. They also give very smooth interpolation for computer-assisted animations.

Also related to spinors in quantum physics and exists as a subalgebra of some conformal geometric algebras.

In addition to the other replies, check out Clifford algebras which generalize these types of numbers:


I've known about quaternions for many years and even used them to program rotation matrices for a few 3d projects but I've never really given them much thought beyond that. For some reason, your comment just completely changed the way I thought about them. To think that the i part of ijk is the same i = sqrt(-1) is mindblowing to me. I had never considered that there may be other "more imaginary" dimensions that were required to solve yet more complicated problems.

Even better: ii = jj = kk = ijk = -1

You also have ij = k, jk = i, ki = j
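Those relations pin down the whole multiplication. A quick sketch of the Hamilton product on (w, x, y, z) tuples (the helper name is mine) that checks them:

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one
assert qmul(qmul(i, j), k) == minus_one                     # ijk = -1
assert qmul(i, j) == k and qmul(j, k) == i and qmul(k, i) == j
assert qmul(j, i) == (0, 0, 0, -1)                          # ij = -ji: not commutative
```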

What are these things? I have no idea, nor did I know there are 4 more - and only 4 more.

One way to think of it is that numbers are mainly used to solve problems by studying the behavior of subsets of them that have specific properties. Like "prime numbers", "square-free numbers", and "solutions to this given equation", etc., and the interesting feature of numbers in general is that they can embed multiple such concepts at the same time. This is helpful because you can find numbers that share multiple properties in order to bridge between otherwise unrelated problems -- it makes it easier to compose solutions of simpler problems into solutions for larger, more complex problems.

From that point of view, quaternions and octonions solve problems in the same way more common numbers do, they just have different sets of properties and so help solving a different set of problems. Of course, they can only do this if we study them well enough to have a sufficiently large suite of concepts and relations in our toolbox.


IDK much about octonions, but quaternions have similar algebraic properties to rotation groups and can be used to encode rotations in 3D graphics.

Not sure about octonions. But quaternions are used somewhat in mechanics, specifically for dealing with rotations. Quaternions are computationally simpler for describing arbitrary rotations in three dimensions.

> Imaginary numbers are needed to take the square root of a negative

No, they are not. That problem is not well defined. A symbol is not a solution.

Complex numbers are needed to solve polynomials of higher degree. E.g. x^4 + x^2 = -1 can be simplified to y^2 + y = -1 with y = x^2, which wouldn't have a real solution.

For the curious, John Baez has written about octonions pretty extensively: http://math.ucr.edu/home/baez/octonions/

Happy to see John Baez fans on here. I like his series on his favorite numbers https://www.youtube.com/watch?v=vzjbRhYjELo

Also, Joan Baez's cousin. And her father, Albert Baez, explaining elliptical orbits: http://www.youtube.com/watch?v=_qpKhCa64Eo

> There the game stops. Proof surfaced in 1898 that the reals, complex numbers, quaternions and octonions are the only kinds of numbers that can be added, subtracted, multiplied and divided.

1. I think they meant "the only kinds of numbers constructed in this way". 2. Sedenions can still be added, multiplied, subtracted and divided; it's just that multiplication and division lose most of their useful properties. With octonions you've already lost associativity and commutativity, though.

>it's just that multiplication and division lose most of their useful properties.

Specifically they lose the property of not having zero divisors.

There exist sedenions a, b != 0 such that ab = 0
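This is easy to check by machine with the Cayley-Dickson doubling construction, which builds the complexes from pairs of reals, quaternions from pairs of complexes, and so on up to the sedenions. Everything below (names, the flat-list representation) is a sketch; since sign conventions for the doubling formula vary between presentations, the code searches for a zero-divisor pair of the standard shape rather than hard-coding one:

```python
def cd_add(x, y): return [a + b for a, b in zip(x, y)]
def cd_sub(x, y): return [a - b for a, b in zip(x, y)]

def cd_conj(x):
    """Conjugation: negate every component except the real part."""
    if len(x) == 1:
        return list(x)
    n = len(x) // 2
    return cd_conj(x[:n]) + [-t for t in x[n:]]

def cd_mul(x, y):
    """Cayley-Dickson product on coefficient lists of length 2^n:
    1 = reals, 2 = complexes, 4 = quaternions, 8 = octonions,
    16 = sedenions. Convention used: (a, b)(c, d) = (ac - d*b, da + bc*)."""
    if len(x) == 1:
        return [x[0] * y[0]]
    n = len(x) // 2
    a, b = x[:n], x[n:]
    c, d = y[:n], y[n:]
    return (cd_sub(cd_mul(a, c), cd_mul(cd_conj(d), b)) +
            cd_add(cd_mul(d, a), cd_mul(b, cd_conj(c))))

def e(i, dim=16):
    """i-th basis element of the dim-dimensional algebra."""
    v = [0.0] * dim
    v[i] = 1.0
    return v

# Octonions (dim 8) are already non-associative:
o1, o2, o4 = e(1, 8), e(2, 8), e(4, 8)
assert cd_mul(cd_mul(o1, o2), o4) != cd_mul(o1, cd_mul(o2, o4))

# Sedenions (dim 16) additionally have zero divisors: search for a pair
# of the standard shape (e_i +/- e_j)(e_k +/- e_l) that multiplies to 0.
zero = [0.0] * 16
found = next(((i, j, k, l, s1, s2)
              for i in range(1, 16) for j in range(i + 1, 16)
              for k in range(1, 16) for l in range(k + 1, 16)
              for s1 in (1.0, -1.0) for s2 in (1.0, -1.0)
              if cd_mul(cd_add(e(i), [s1 * t for t in e(j)]),
                        cd_add(e(k), [s2 * t for t in e(l)])) == zero),
             None)
print(found is not None)   # True: nonzero sedenions can multiply to zero
```

The brute-force search takes a few seconds in pure Python, but it makes the claim convention-proof.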

Another (related) property that fails is that inverses stop being useful for cancellation. Inverses still exist, for every p there's a q with pq = qp = 1, but if you've got an equation ap = b you can't cancel to get a = bq, because we don't have associativity. The left hand side (ap)q doesn't equal a(pq), so you can't reduce it to a.

Of course associativity doesn't hold in the octonions either, but it holds just enough for cancellation to work.

Yeah, octonion multiplication is alternative.

No, power associativity is never lost; after the sedenions the properties remain the same.

They don't remain exactly the same. The answers to this (https://math.stackexchange.com/questions/641809/what-specifi...) question provide some interesting starting-points.

That happens in general for matrices too.

Yes, and this is a bit obvious, but reals, complex numbers, split complex numbers, quaternions, octonions, sedenions, can all be represented as matrices of the appropriate form.

That's at most sort-of-true. It's not possible to represent octonions by matrices of numbers in such a way that multiplication of matrices corresponds to multiplication of octonions, because matrix multiplication is associative and octonion multiplication isn't.

can't be true, unless there is some special matrix multiplication rule - afaik, standard matrix multiplication is associative

An n-dimensional matrix of octonions

Being composition algebras makes R,C,Q and O more interesting than others:

N(xy) = N(x)N(y) - N is called norm.

Without this property, you have zero divisors.

edit: throwawaymath uses better notation: |xy| = |x| • |y|

What type of thing is N? What you wrote doesn't seem to make sense if N is just a constant, but I don't see how it makes sense if N is a function? Or maybe you didn't mean multiplication?

E.g. N = 3, x = 2, y = 3

N(xy) = 3(2 . 3) = 18

N(x)N(y) - N = 3(2) . 3(3) - 3 = 51, which is not 18

That's not a subtraction sign, it's a hyphen. N is the norm of x, denoted by |x|. Technically the norm is a scalar-valued function applied to a vector, hence the functional notation N(xy) = N(x) • N(y).

So to be explicit, they're saying |xy| = |x| • |y| implies you cannot have a 0 divisor.

N here is a function, not a constant. I was confused at first, but I think the "-N" in that post wasn't intended to mean "minus N" but rather ", where N is the function called...".

The "norm" of a number is more or less its absolute value: its size, its magnitude. So "1" has norm 1, but "-1" also has norm 1, as does "i", and "-5" or "5i" or "4-3i" all have norm 5. So these four mathematical structures (R, C, Q, and O) all have the property that the norm of a product is equal to the product of the norms. Things get really obnoxious (or at least really unfamiliar) if you don't have that property.

Edit: Sorry for the repetition! I've got to remember to reload these pages before replying.
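To make "norm of a product = product of the norms" concrete, here's a small check for quaternions under the Hamilton product ((w, x, y, z) tuples; the helper names are mine):

```python
import math
import random

def qmul(p, q):
    """Hamilton product, (w, x, y, z) convention."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def norm(q):
    return math.sqrt(sum(c * c for c in q))

random.seed(0)
p = tuple(random.uniform(-1, 1) for _ in range(4))
q = tuple(random.uniform(-1, 1) for _ in range(4))
# The composition-algebra property: |pq| = |p| * |q|
print(abs(norm(qmul(p, q)) - norm(p) * norm(q)) < 1e-12)  # True
```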

is there a proof that real numbers can be added?

As far as I know, addition is something you define (an axiom) in order to construct the real (or other) numbers.


I think the following quote is the QED for academia ruining any chance at actual research:

“What I had was an out-of-control intuition that these algebras were key to understanding particle physics, and I was willing to follow this intuition off a cliff if need be. Some might say I did.”

It's my understanding that most of Einstein's theory was the product of intuition, backed up after the fact with mathematics and experimentation. Intuition isn't a bad compass, as long as you can set it aside when reality measurably contradicts it. In fairness, Einstein never accepted quantum mechanics because it flew in the face of his intuition, but intuition still got him pretty far.

> Einstein never accepted quantum mechanics

Wait? Since when did Einstein not accept Quantum Mechanics? He won the Nobel Prize for his work in discovering an important part of Quantum Mechanics.

Einstein rejected the Copenhagen Interpretation of Quantum Mechanics. That's not the same thing as rejecting Quantum Mechanics.

I've seen no claims that he would object to either the Everett or Bohm Interpretations of QM. (But then again, he died before they were invented.)

Einstein, by the way, is far from the only critic of the Copenhagen Interpretation. Quite a few physicists are drawn to the Everett Interpretation instead, for instance.

I'm replying to myself here. I did a little research and discovered that Einstein was aware of Bohm's interpretation, and Einstein seemed to think that it missed the mark. Einstein did die, however, two years before the publication of the Everett Interpretation, and I've seen no evidence that Einstein received any early information on that seminal Interpretation.

It would be very interesting to know how Einstein would have received the Everett Interpretation.

This is what Einstein had to say about Bohm's Interpretation, in a letter that Einstein wrote to Bohm about six months before he died:

"In the last few years several attempts have been made to complete quantum theory as you have also attempted. But it seems to me that we are still quite remote from a satisfactory solution to the problem. I myself have tried to approach this by generalising the law of gravitation. But I must confess that I was not able to find a way to explain the atomistic character of nature. My opinion is that if an objective description through the field as an elementary concept is not possible, then one has to find a possibility to avoid the continuum (together with space and time) altogether. But I have not the slightest idea what kind of elementary concepts could be used in such a theory."

From my reading of this, it seems that Einstein's greatest concern here is that GR and QM had not yet been unified.

There is a sense in which no competent scientist can fully accept either QM or GR, despite their incredible accuracy in making predictions, because we know that they are incompatible with each other, and consequently, both wrong or incomplete in some very important manner.

Well, he never accepted non-locality. And it turns out, quantum mechanics is indeed non-local.

This is a common misconception. (The Copenhagen interpretation of) QM is a local theory, in the sense that there's no faster-than-light travel of information.

Otherwise you'd have to take into account what's happening at the other end of the universe in order to successfully predict the outcome of experiments. (Which I hear is what the people working on Bohmian mechanics struggle with.)

It's been a while since I studied this, and I know that some Copenhagenists try to wiggle their way out of this. But if you consider the probability waves in the Copenhagen interpretation to be "real" and "measurement" to cause their collapse, the universe's book-keeping information would seem to have to travel faster than the speed of light, IIRC.

On the other hand, it's certainly been proven that this cannot be used to transmit information that you might like to at faster than the speed of light. So causality is preserved in any case.

In the Everett interpretation, everything is local.

This is very false. Einstein's reputation was built on explaining known phenomena such as the photoelectric effect and Brownian motion. Special relativity was heavily motivated by a pile of puzzling evidence and a bunch of existing mathematics.

> This is very false.

But didn't Einstein write the following in his own book?

“I believe in intuition and inspiration. … At times I feel certain I am right while not knowing the reason. When the eclipse of 1919 confirmed my intuition, I was not in the least surprised. In fact I would have been astonished that it turned out otherwise. Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution. It is, strictly speaking, a real factor in scientific research.”

And he funded his studies, initially, as a hobby alongside a job. We've not progressed very far if groundbreaking research almost requires a person to find a novel study path.

What do you mean by "novel study path" here? Funding yourself with money received from a job?

On my first reading I took "novel study path" to mean new research techniques or something along those lines, which seems obvious and is not likely what you meant.

Either applies really.

It's not false, w.r.t. general relativity

And you still need a good amount of creativity to come up with special relativity and to a certain extent, with the photoelectric effect (though we could say that was the 'easy' one)

But yes, the puzzling results and existing mathematics were part of it. But there is a reason we hear much more about Einstein than Lorentz or Michelson and Morley

> Special relativity was heavily motivated by a pile of puzzling evidence

If you mean the Michelson–Morley experiment (apparent zero velocity of earth with respect to the ether), apparently Einstein himself claimed that wasn’t a motivation.

I believe it was all about resolving the long-standing "action at a distance" question raised by Newtonian physics (Newton himself noted that this was philosophically disturbing but didn’t venture an answer). Intuition and philosophy, rather than experimental evidence and mathematics.

At the beginning of his paper he cites the relativistic nature of Maxwell's equations, and "unsuccessful attempts to discover any motion of the earth relatively to the 'light medium' ".


Gödel was driven by convictions about meta-mathematics, motivated by his Catholic faith. Perhaps a purely mechanistic world view would have eventually generated the same result, e.g. Turing's halting problem, but who knows.

Gödel's background was Lutheran, not Catholic, and I don't think he was particularly orthodox. (I have no idea how his religious convictions influenced his metamathematical ones; you might well be right about that.)

Thanks for the correction. I'm sure I've heard biographers (Jessica Goldstein perhaps?) describe his faith as having had a strong influence, but I must have misconstrued it quite a bit as memory faded. You're right, he was raised Lutheran. But his father was Catholic.

Gödel spacetimes to fuck with your intuitions.

Thought experiments, not intuition. Thought experiments are a kind of theoretical physics: making an assumption ("c is the universal speed limit") and thinking through the implications ("addition of velocities cannot be linear"), even though the available experimental tools are not powerful enough to demonstrate those implications in a way observable to the thought-experimenter.

As a current PhD student, I'd say the vast majority of researchers aren't anywhere near that enthusiastic about their research.

Many marriages fail in the time it takes to complete a PhD. If academia can't find a way to fund people with actual interest then something's wrong.

Many marriages fail while trying to get a PhD, imo.

Yet they have no trouble raising tuition every year.

Eh, there's a balance to be struck, and that's obvious to anyone that's seriously thought about this issue.

There's no shortage of specious ideas. From outright quackery, the naive, or the simply misguided, there are countless ways to waste a lot of time and money on ideas that will lead nowhere. Especially when all you're guided by is intuition.

That's not to say that intuition isn't valuable; rather, intuition shouldn't be the only guiding principle. If an idea is 'real' in the sense that it will produce substantive research findings, it's reasonable to expect some kind of evidence for this. Maybe you're trying to show that A -> D. Well, showing a bit about B or C can go a long way to convincing people it's worth looking at the link between A and D.

The counter-argument is that it's possible there is not B or C, that a large and courageous leap is required to get to D. That's indeed possible, and arguably has been demonstrated with some famous results. But it doesn't follow that you simply must take everyone's giant leaps seriously and give them funding.

Ultimately, we go by proxies. If A -> D is required, well, at least show us that you got from A to B in some other issue. Give us evidence, however imprecise, that you might be that 1 in a 1,000 (or 1,000,000?) leaper that lands somewhere successfully.

So yes, there's an art to knowing when an idea is 'ripe' for a wider audience, and when it's time to stake your career on it. There's no reason you can't work on something in the background or during a sabbatical. The notion that 'academic incrementalism' is so destructive is a tenuous one, and certainly isn't being demonstrated here. To argue this is to say that Dixon would have been successful, if only he had gotten this or that position. Yet it appears that his approach was the issue, not financial or departmental support. I'd argue that the academic system correctly identified that his idea wasn't ready yet. Now that some demonstrable progress is being made, even if it falls far short of 'D', it's attracting attention and enthusiasm.

Finally, the notion that academia produces no actual research is farcical.

It's rather amusing that the author assumes that non-associative objects are "weird" for physicists (or at least that was my reading), since the velocity addition formula is in general non-associative and that has been extensively studied.

(I remember three separate occasions in my undergrad particle physics class where we actually went through all the calculations involved with the velocity addition formula and finally saw that SOL was wacky).


I wonder if that is uncommon or if the article is just a bit ungenerous towards our understanding of non-associative objects.

To be fair, one never talks of relativistic "velocities" beyond undergrad physics. It is much easier to talk about Lorentz "boosts", and as members of a Lie Group, they are associative, though not commutative.

This. It's better to just make the full leap into spacetime geometry (4-vectors and 4-tensors) than keep trying to slice it into annoying 3-vectors.

The other thing that contains non-associative elements is logic. Due to the Curry-Howard correspondence, that means functional programming too.

So this quote about mathematicians seems a bit off. Baez is talking about mathematical physicists, I guess, not logicians and computer scientists. I know he is not a functional programmer from various online remarks he has made. So maybe it's more of a personal perspective that we shouldn't take too seriously.

'“Nonassociative things are strongly disliked by mathematicians,” said John Baez, a mathematical physicist at the University of California, Riverside, and a leading expert on the octonions. “Because while it’s very easy to imagine noncommutative situations — putting on shoes then socks is different from socks then shoes — it’s very difficult to think of a nonassociative situation.” If, instead of putting on socks then shoes, you first put your socks into your shoes, technically you should still then be able to put your feet into both and get the same result. “The parentheses feel artificial.”'

Not to mention operator calculus. Operators do not, in general, associate. Feynman developed his own notation for operator calculus.

How do operators in general fail to associate? Usually the "multiplication" operation for operators is function application, which is a paradigmatic example of a thing that is associative.

Operator calculus is much more general than multiplication. It can involve exponentiation, differentiation, integration, etc.

Under what conditions is velocity addition not associative? In 1D it's associative, right? But I can't think it through in 3D.
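A quick numerical experiment (Einstein's velocity-addition formula in units with c = 1; the helper name is mine) suggests an answer: non-collinear velocities generally do not associate, while collinear ones do. One subtlety my first attempt ran into: if the third velocity lies along the rotation axis of the Thomas-Wigner rotation generated by the first two (e.g. three mutually orthogonal boosts), the difference happens to vanish, so the example keeps w in the plane of u and v:

```python
import math

def v_add(u, v, c=1.0):
    """Einstein velocity addition u (+) v for 3-velocities given as tuples,
    with c = 1 by default."""
    udotv = sum(a * b for a, b in zip(u, v))
    gamma_u = 1.0 / math.sqrt(1.0 - sum(a * a for a in u) / c**2)
    factor = 1.0 / (1.0 + udotv / c**2)
    coef_u = 1.0 + (gamma_u / (c**2 * (1.0 + gamma_u))) * udotv
    coef_v = 1.0 / gamma_u
    return tuple(factor * (coef_u * a + coef_v * b) for a, b in zip(u, v))

u, v, w = (0.5, 0.0, 0.0), (0.0, 0.5, 0.0), (0.4, 0.0, 0.0)
lhs = v_add(v_add(u, v), w)   # (u + v) + w
rhs = v_add(u, v_add(v, w))   # u + (v + w)
print(max(abs(a - b) for a, b in zip(lhs, rhs)) > 1e-3)  # True: not associative

# Collinear velocities (the 1D case) do associate:
p = (0.5, 0.0, 0.0)
assert max(abs(a - b) for a, b in
           zip(v_add(v_add(p, p), p), v_add(p, v_add(p, p)))) < 1e-12
```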

I’ve been exploring this idea as well; however, I have a hunch it’s not actually octonions but dual quaternions, as they are the perfect formalism for representing 3D movement over time.

And to add to that, they are a Lie group, i.e. they are anticommutative, i.e. AB = -BA.

I’ve also been exploring this relationship between dual quaternions and linear logic. It’s pretty wild. I’m curious if anyone has any opinions on this.

Incidentally, I've also been thinking about dual quaternions for modeling rigid objects in 3d. For anyone who's curious about the topic, these seem to be a decent set of notes: http://web.cs.iastate.edu/~cs577/handouts/dual-quaternion.pd...

Linear logic always reminded me of the no-cloning property in quantum physics, where a resource can only be used once -- no copying for multiple use.

Can you elaborate on the sense in which you think of linear logic as "squaring to zero"? UPDATE: A little googling throws up this: http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/...

Good point about no copy.

So this is kinda roundabout, but there's homotopy type theory, which states that topology = logic = type theory. For a second let's say that there's a fourth "element" in this equation and that it's quaternions (this isn't a stretch; I can provide some papers that hint at this).

Linear logic is classical logic with the added reference to time without explicitly referencing time.

Dual quaternions are all about modeling deformation over time.

Therefore, I would conclude that linear logic and dual quaternions are related.

Do you have an understanding of Clifford algebras? I’ve read a little about them here on hn, and they seem quite powerful. I don’t understand them enough to know if they could also be an appropriate abstraction.

Yes! Dual quaternions are a Cl(2,0,1) Clifford algebra. Good ol Cliffy invented them too.


Check the guy on the left getting deformed like an idiot. Compare it with the smooth criminal on the right.

> http://www.chinedufn.com/dual-quaternion-shader-explained/

Offset curve deformations blow dual quaternions out of the water for rigging!

Tell me more. I briefly looked it up and I do agree it does look somewhat nicer. However, how's the performance?

Fast enough for use in live character rigs in film running on the CPU -- I know the technique is used at Sony Imageworks and at Dreamworks. I can't comment on whether the method is suitable for live rigs in games, or how it would translate to the GPU.

There is a process where you put in a space (number of dimensions) and a metric (Euclidean, for example) and produce a Clifford algebra. This is also sometimes called “geometric algebra”, although there are a lot of sensationalist posts using this name on the internet.

This Clifford algebra has an odd and even part, and the even part is an algebra in its own right, and is very useful for representing “rotations” in the space.

With the Euclidean metric, with one input dimension, the even Clifford algebra is just the reals. With two dimensions, you get the complex numbers. With three, the quaternions. However, since a Clifford algebra is always associative, you don’t get the octonions in this way.
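A minimal sketch of that construction, using the common bitmask encoding of basis blades (Euclidean signature only, so every e_i squares to +1; names are mine). The even blades of Cl(3) reproduce the quaternion relations:

```python
def reorder_sign(a, b):
    """Sign from sorting the product of two basis blades into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def blade_mul(a, b):
    """Geometric product of basis blades; one bit per basis vector, e_i^2 = +1."""
    return reorder_sign(a, b), a ^ b   # (sign, resulting blade)

# Even part of Cl(3): {1, e23, e13, e12}. The bivectors multiply like i, j, k:
i, j, k = 0b110, 0b101, 0b011          # e23, e13, e12
print(blade_mul(i, i))   # (-1, 0): each bivector squares to -1
print(blade_mul(i, j))   # (1, 3):  ij = k  (blade 0b011)
print(blade_mul(j, i))   # (-1, 3): ji = -k
```

Since the product of two blades is always another signed blade, the algebra is associative by construction — which is why, as the comment says, the octonions can never come out of this.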

Finally, if you put in four dimensions and the minkowski metric, the even Clifford algebra construction gives you a lovely algebra representing spatial rotations in three of the dimensions, and “boosts” along the time direction - exactly what you need to do calculations in special relativity.

I am working my way through Understanding Geometric Algebra by Kenichi Kanatani up to the Grassmann Algebra chapter, so I can then pick up John Browne's Grassmann Algebra Volume 1 Foundations.

Kenichi's book works from basic vector calculus, Hamilton, Grassmann and Clifford. Grassmann was not even a mathematician by trade, but a school teacher and a linguist. He translated the Rig Veda. His big mathematical work did not get attention when written; it only did posthumously. I now know things with cool names like differential forms and fiber bundle space [1]. I'm becoming quite the Grassmann fanboy!

I am thinking if I can learn these algebras (Grassmann, Clifford, etc...) I will be able to deal with more abstract thinking in geometry and handle it more concisely and logically. Almost like when I use J programming language for math! [2]

I think quaternions don't generalize past 3 dimensions, so what about octonions?

  [1]  https://galileospendulum.org/2012/04/26/everything-is-geometrical-hermann-grassmanns-algebra/
  [2] jsoftware.com

one of the most intriguing/funny things about dual numbers (and dual quaternions in particular) is how they encode the derivative of the real part into the dual part, and how everything plays out nicely algebraically.

this could be a nice implementation trick for an automatic differentiation framework.
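A minimal sketch of the idea (names are mine; a real AD framework would also cover division, composition with arbitrary functions, etc.):

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    re: float  # value
    du: float  # coefficient of eps, where eps^2 = 0; carries the derivative

    def __add__(self, other):
        return Dual(self.re + other.re, self.du + other.du)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps -- the product rule falls out
        return Dual(self.re * other.re, self.re * other.du + self.du * other.re)

def sin(x: Dual) -> Dual:
    # chain rule baked into the dual part
    return Dual(math.sin(x.re), math.cos(x.re) * x.du)

x = Dual(2.0, 1.0)   # seed dx/dx = 1
y = x * sin(x)       # f(x) = x*sin(x)
print(y.re, y.du)    # f(2) and f'(2) = sin(2) + 2*cos(2)
```

This is exactly forward-mode automatic differentiation: the dual unit squaring to zero truncates the Taylor series after the first-order term, so every arithmetic operation propagates the derivative for free.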

This reminds me a bit of atomic orbitals and how they're based on spherical harmonics. I was curious why there were e.g., eight electron 'sockets' in the second shell. Eight seemed like a very arbitrary number to me and my high school chemistry teacher's inability to explain at the time did its share to put me off chemistry.

Many years later I remembered my old question and started looking it up. It turns out that eight is the sum of 1+3+3+1 perhaps similarly to what's in the article.

Spherical harmonics ends up giving rise to a three dimensional 'overtone' series (borrowing from my understanding of music theory). In the first order there's only one mode of vibration. In the second order there are three additional modes. The summands above are something like positive and negative degrees of freedom for each mode in the second shell.
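That counting is just 2l+1 harmonics per angular momentum l (each holding two spin states), summed over l < n — so the second shell is 2(1+3) = 1+3+3+1 = 8. A quick sanity check (helper name is mine):

```python
def shell_capacity(n: int) -> int:
    # 2l+1 spherical harmonics per angular momentum l, times 2 spin states
    return sum(2 * (2 * l + 1) for l in range(n))

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
```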

Here's a diagram of what the modes look like in each order:


...and here's an animation of a sphere undergoing the differing modes of vibration:


If I understand correctly, the math related to atomic orbitals can be described with 3 dimensions of space: x, y and z plus one more orthogonal dimension of frequency/time which would mean quaternions would be most directly applicable?

"Proof surfaced in 1898 that the reals, complex numbers, quaternions and octonions are the only kinds of numbers that can be added, subtracted, multiplied and divided. "

Not true. Any field (in the algebra sense) has these properties. [1]

Quaternions and octonions also have weirder properties: quaternions are non-commutative (jk = -kj) and octonions are non-associative: a(bc) != (ab)c.
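The quaternion case is easy to verify numerically; a small sketch with quaternions as (1, i, j, k) coefficient tuples (function name is mine):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

j, k = (0, 0, 1, 0), (0, 0, 0, 1)
print(qmul(j, k))   # (0, 1, 0, 0):  jk =  i
print(qmul(k, j))   # (0, -1, 0, 0): kj = -i, so jk = -kj
```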

I think the article meant these are the only Euclidean Hurwitz algebras [2], which is a far cry from the claim.

[1] https://en.wikipedia.org/wiki/Field_(mathematics) [2] https://en.wikipedia.org/wiki/Hurwitz%27s_theorem_(compositi...

It's in the article towards the end, but these are the only kinds of ... "over real numbers," which constraint Furey believes may be only an approximation.

Even when adding "over the real numbers", the claim in the article is wrong: R(x), the algebra of rational functions of one variable with real coefficients, is a field, and a module over R containing an isomorphic image of R as a subfield.

You really do need the additional constraint of the Hurwitz form to restrict the possibilities to R, C, H, O ...

(Of course this has nothing to do with the work of Furey - I'm just seconding that the claim in the article is incomplete and inexact as worded.)

Example: rational numbers

The rationals are a subfield of the reals.

Yeah, but they are still a counterexample to the stated claim. It could be that only ℝ as a whole forms a field.

> “Because while it’s very easy to imagine noncommutative situations — putting on shoes then socks is different from socks then shoes — it’s very difficult to think of a nonassociative situation.” If, instead of putting on socks then shoes, you first put your socks into your shoes, technically you should still then be able to put your feet into both and get the same result. “The parentheses feel artificial.”

> The octonions’ seemingly unphysical nonassociativity has crippled many physicists’ efforts to exploit them.

I'd say nonassociativity is an extremely physical property. And the example with shoes and socks, if anything, shows just that. In an idealised mathematical model, if you put socks inside shoes and then insert feet into socks, you could end up with the same thing as if you put feet in socks and then both in shoes. But in the real world, or just an accurate model of the physical world, you'll end up with a very different result.

I feel that nonassociativity is what's missing to take into account time, whose unidirectionality is suspiciously missing from almost all of physics.

If you're looking for a deep look at time -- a Total Perspective Vortex, I'd recommend "Spontaneous Inflation and the Origin of the Arrow of Time".


I don't know. Unidirectionality of time just feels like something that should be everywhere. Nonassociativity of the basic math that governs all matter seems to me a better source for it than inflation or entropy or gravity or any one specific thing.

How does this tie into E8? Does this work validate Garrett Lisi or is it parallel?


I remember hearing Garrett Lisi's TED talk on the E8 ToE (link below) years ago and hoping that components of the theory would eventually come to light through advances in particle physics research. Has anyone heard anything supportive or to the contrary since?

TED Talk: https://www.ted.com/talks/garrett_lisi_on_his_theory_of_ever...

Paper: https://arxiv.org/abs/0711.0770

Edit I remember seeing an update from him two years ago where he addresses some of the criticism: https://www.quora.com/What-is-the-status-of-Garrett-Lisis-E8...

There's been some criticism of his work, back when it first came out.

I'm fuzzy on the details now, but the criticism was along the lines of "mixing things together that don't make sense".

I remember something about being unable to do any CP violation.

Why are complex numbers not called duonions then?

And why are the real numbers not called "onions"?

Obviously due to potential confusion with the zero-dimensional numbering system, "0nions".

Hehe, well we could have unions & oneions. For some reason as I'm reading about octonions, I keep picturing 8-cloved garlic.

I went upstairs to find the machine where I was already logged in so I could hunt down and upvote this comment.

> Why are complex numbers not called duonions then?

Why are pairs not commonly called duples?

I think that maybe if you lock a mathematical physicist in a box forever with an x-TeV collider without letting them upgrade it, they will eventually find a theory that hits all of the datapoints and, depending on their philosophical weaknesses, declare it "final."

Surely what we're looking for is the simplest theory among the theories that hit all the datapoints. That's the theory that's most likely to be the "Theory of Everything" and therefore to continue working when we upgrade our collider.

Isn't it strange how the probabilistic motivation of Occam's razor loses its foundation when applied to a Theory of Everything? Simple is beautiful, that's all I've got.

Why would the simplest theory be the most likely to be true?

I think there's a little more foundation to it than just "Occam said it". I think the point is that the simplest theory is a) a lower bound and b) most valuably falsifiable.

Take a theory, add a bit about a Unicorn, or a Dragon. Now you have a more complicated theory. You can arbitrarily contrive a more complicated solution.

But the simplest? You've found a local minimum. And because it's more stable you can compare it to competing theories without trivial refutation. See: the 3-5 theories of dark matter.

Occam's razor is not about the simplest theory being true, but about how from a simple theory you can create infinitely many more just by adding an element.

If you have infinitely many possible theories, the only logical thing to do is to stay with the simplest theory until it can't explain the world anymore; then you choose the next simplest one.

There is also, I think, an information-theoretic argument: when creating a theory you are trying to compress all the observations. The better the compression, the better the theory.

I don't understand why people down-vote you, by the way.

That's a big topic, and the definition of simple is subtle. I recommend David Deutsch's book _The Beginning of Infinity_ for an accessible introduction to what makes one theory better than another when they both fit the observations.

One theory might be better than another when they both fit the observations, but you can't say which one is truer. For example, had Newton been presented with GR but only allowed his contemporary evidence, he should not conclude that classical gravitation is more true than GR on the grounds that he doesn't have any evidence for the additional complexity.

I think you are wrong. If he is following the scientific method, and he doesn't have evidence, he should 'conclude' that.

The key is that in science, you never 'conclude' (in the sense of finalize) anything. Everything is provisional until new evidence overturns your current understanding.

Volumes have been written about this topic (principle of least assumption, parsimonious representation, Occam's razor, etc.). It underlies all statistical learning theory and information theory of which science is a special case.

Fewer edge cases mean it's less likely to break if we discover a new particle or something else unexpected.

I guess that's a good overview. It's related to the phenomenon of overfitting in machine learning: you can always easily find a sufficiently complex (or complicated, large if you prefer) theory fitting all data points. Because this theory simply encodes each observed case (including progressive sophistications of encoding), you naturally expect it to fail on unobserved cases -- it makes no effort at generalization. The simpler theories have a greater chance of generalization, they're more likely to be the "true" mechanism of the process that doesn't simply encode "edge cases" as you cited, and thus are more likely to also work on unobserved data.
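A toy illustration of that overfitting point, fitting polynomials of increasing degree to noisy samples of a known function (all choices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 10)          # a few noisy observations
x_test = np.linspace(-1, 1, 50)           # unseen points
truth = lambda x: np.sin(np.pi * x)       # the "true mechanism"
y_train = truth(x_train) + rng.normal(0, 0.1, x_train.size)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - truth(x_test)) ** 2)
    print(degree, train_err, test_err)
# training error always shrinks as the model grows;
# error on unseen points need not -- the degree-9 fit memorizes the noise
```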

Honestly I haven't seen attempts at making this process more rigorous when applied to physics. There's the large corpus of machine learning study which provides concrete results and even concrete comparison tools, but the times I've asked a physicist I've been dismissed; meanwhile it seems incredibly valuable given the scarcity or large cost of experimental data, which is quite relevant today.

> Honestly I haven't seen attempts at making this process more rigorous, when applied to physics.

Marcus Hutter has expressed this idea quite well ( https://arxiv.org/pdf/0912.5434 ) arguing that (a) smaller/simpler theories have more predictive power and (b) the "size" of a theory includes the complexity of its equations and the parameters needed to specify some result. The latter is important because some theories trade off between these two: e.g. a multiverse theory might have simple equations ("every possibility happens somewhere") but require very precise "coordinates" to pin-point the actual possibility that we observe.

Not sure if other physicists know of or take it seriously though.

Interesting. Apparently he needs to assume a particular multiverse theory to prove it. While I don't object to those on principle, I don't believe they're needed to prove heuristic, good-enough versions of Occam's razor that work in the real world (albeit without guarantees), based on the arguments outlined in the previous comments.

> The latter is important because some theories trade off between these two: e.g. a multiverse theory might have simple equations ("every possibility happens somewhere") but require very precise "coordinates" to pin-point the actual possibility that we observe.

I think this is an important observation that's quite obvious to ML researchers et al. but again seems to escape current physics discussions. An example is the endless drama about "fine tuning": if your new theory requires many fewer bits for the equation description, the fact that it requires fine tuning is irrelevant as long as the additional model parameter precision uses fewer bits -- then it should be the preferred candidate.

W.r.t. [computational] multiverse theories (and variants such as Tegmark's MUH, Schmidhuber's, and others), I do believe they're an inevitable progression of physics/philosophy. I just think it's a bit pretentious to have any certainty about a particular flavor. I feel there's still much philosophical and mathematical ground to be covered; it tests the limits of our imagination. It seriously feels like a very important step for humanity at large though -- finally approaching metaphysical theories that actually make sense, and explain the basis of much about humanity, existence, ethics, etc. I think it's an important void to be filled after the decline of religion, hopefully in conjunction with the spread of humanism.

Decrease the number of variables and conditions of any physical system, say, to something binary, and you end up with a 50/50 argument, which at worst means you can always fall back on trial and error, and at best means you might discover something you didn't see before. Simple theories are better than "holistic" theories, because they act as a starting point for many more testable simple theories. Knowledge in its most primitive form is ultimately somewhat binary, completely testable and scalable.

This is a good question, and one that I don't have a complete or rigorous answer to. But I can describe my intuition:

A theory is just a list of assertions. In the absence of any evidence, each assertion is just as likely to be true as false. Therefore the probability of a theory with n assertions is 2^-n. So the more assertions there are, the less probability.

Scientific theories are not likely to be true: they are either true enough to be taken seriously or false enough to be dead. Accumulation of evidence and knowledge makes scientific theories descend a ladder of wrongness that goes from unthinkable, to utterly ridiculous, to wrong but respectably clever, to somewhat grounded in reality, to good enough approximations for some purposes, to best in class but not perfect, to positively agreeing with all available evidence.

Given equally true theories scientists look to other properties to establish theory quality, and simplicity is a philosophically important one: it represents the belief that natural laws should be as simple and elegant as evidence allows them to be.

Well they will and they won't, until you open the box

I have to disagree that quaternions underlie Special Relativity. Although special relativity does use 4-vectors, those aren't quaternions.

I remember going down that path when I was an undergraduate. Fortunately I had a very experienced theorist to hand who explained that, yes, people did try that, but stopped bothering because it doesn't generalize to general relativity, and there was no point keeping two mathematical toolboxes around when you could have one.

Clifford algebras are great for both SR and GR and have all the properties that one likes in quaternions (and more)

There are equivalent formulations of SR using quaternions.

But do they generalize to GR?

No, which is why the quaternionic approach isn't generally presented in textbooks. I think Taylor and Wheeler's SR book touches on it as an aside.

As a math graduate, octonions (and quaternions), at least to our group, were something that got discussed once or twice and not much after that. However, I would not call them weird or unusual, at least not more weird than something like a near-ring, which similarly simply drops one of the common assumptions for rings (I'm not sure which, actually). As mentioned in some of the comments, nonassociative algebras do get studied. Studying not-necessarily-commutative structures such as (all) groups is in fact an even much more common thing to do.

I would say that studying octonions exclusively would be something I personally would avoid, as I would rather try to study the four structures (reals to octonions) together, either more generally (e.g., group theory and ring theory) or more abstractly (e.g., as members of categories) and form an opinion on whether I think octonions in particular are useful for the questions that I want to ask.

That is not to say that the research here is not interesting, but it is difficult to judge that from "popular" mathematics articles. I got the impression that the author of the article places a much higher priority on the pictures accompanying the post.

I remember that I found it interesting that studying the four dimensional spacetime bears more fruit than stopping at three space dimensions, and that at the same time from complex numbers the next structure ends up also having four dimensions (i.e., being modelled by 4-tuples). However, apart from being interesting in this narrow sense, I do not know whether this suggests any creative yet precise mathematical questions.

Why do I feel like the octonions are likely related to the spin networks of LQG (loop quantum gravity)? The non-associative property is probably key to explaining how change in space is propagated through the spin networks, and thus why time is directional.

In the approach from the article it seems as if you are picking a mathematical structure in the middle of nowhere, with the universe faintly visible at the horizon, and then you start wandering around hoping that you will stumble across a path leading to the horizon.

But there don't seem to be many reasons to believe that such a path exists; there are countless mathematical objects you could pick as a starting point and almost none of them have a path leading to the universe at the horizon. And even of those with a path going roughly in the right direction, many will take a turn before you arrive at your destination.

So it seems much more promising to me to start at the universe and the mathematical structures describing our observations in a straightforward way, and then explore from there the surrounding mathematical structures to see if they are a better fit, suggest new ideas, or whatever.

> In the approach from the article it seems as you are picking a mathematical structure in the middle of nowhere

Quaternions were a very popular way of expressing "classical" physics around the 19th century (and the vector algebra we know today is in some ways just a derivative of quaternion algebra). Complex numbers are extremely useful in many fields even today. It's hardly the middle of nowhere.

But the article is not really talking about quaternions - which are surely a useful tool, probably best known for the nice way in which they can describe rotations - but about R⊗C⊗H⊗O. And it's at the very least not obvious that this thing is anywhere close to where the journey is hoped to lead.

Octonions are to quaternions as quaternions are to complex numbers (and complex numbers to reals). This is called the "Cayley–Dickson construction", iirc.
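The doubling is short enough to sketch with nested pairs (function names are mine; sign conventions differ between references — this uses (a,b)(c,d) = (ac − d̄b, da + bc̄)):

```python
def cd_neg(x):
    return (cd_neg(x[0]), cd_neg(x[1])) if isinstance(x, tuple) else -x

def cd_conj(x):
    return (cd_conj(x[0]), cd_neg(x[1])) if isinstance(x, tuple) else x

def cd_add(x, y):
    return (cd_add(x[0], y[0]), cd_add(x[1], y[1])) if isinstance(x, tuple) else x + y

def cd_mul(x, y):
    """(a, b)(c, d) = (ac - conj(d)b, da + b conj(c)): doubles R -> C -> H -> O -> ..."""
    if not isinstance(x, tuple):
        return x * y
    (a, b), (c, d) = x, y
    return (cd_add(cd_mul(a, c), cd_neg(cd_mul(cd_conj(d), b))),
            cd_add(cd_mul(d, a), cd_mul(b, cd_conj(c))))

# Quaternions as pairs of complex numbers (themselves pairs of reals):
i, j = ((0, 1), (0, 0)), ((0, 0), (1, 0))
print(cd_mul(i, j), cd_mul(j, i))   # k and -k: multiplication stops commuting

# One more doubling gives the octonions, where associativity fails:
zq, oq = ((0, 0), (0, 0)), ((1, 0), (0, 0))
x, y, z = (i, zq), (j, zq), (zq, oq)
print(cd_mul(cd_mul(x, y), z) == cd_mul(x, cd_mul(y, z)))   # False
```

Each doubling sacrifices something: ordering, then commutativity, then associativity — and doubling the octonions to the sedenions loses the division property itself.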

Sure, but the structure the article is about is the tensor product of the four algebras which is an algebra with 64 real dimensions.

Yep, complex numbers get used extensively for calculations in electronics when doing circuit analysis.
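For instance, the standard phasor treatment of a series RC circuit is just complex arithmetic (component values below are arbitrary examples):

```python
import math

# Series RC circuit: complex impedance Z = R + 1/(j*omega*C)
R, C, f = 1_000.0, 1e-6, 60.0   # 1 kOhm, 1 uF, 60 Hz (made-up values)
omega = 2 * math.pi * f
Z = R + 1 / (1j * omega * C)

print(abs(Z))                                    # impedance magnitude in ohms
print(math.degrees(math.atan2(Z.imag, Z.real)))  # phase angle in degrees
```

The magnitude and phase of the complex impedance directly give the amplitude ratio and phase shift between voltage and current.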


Much closer to the topic of the article are quantum mechanics and quantum field theory which make extensive use of complex numbers. But that really does not say much about the usefulness of the structures discussed in the article for modeling the universe, complex numbers are just a rather simple and general tool and can therefore be used for a lot of applications.

The article briefly alluded to this: C, R, H are all already seen in features of the universe, so it's not "the middle of nowhere."

The really interesting part in all this, is that we're still reduced to bouncing values off of imaginary numbers to obtain accurate readings.

We toss our known quantities into a void, anticipating that if some impossible, imaginary thing really can fill the gap, and if or when it does, we'll catch the rebound off of it, and the rest of the universe proceeds predictably.

Somehow, we're always put into a position where we have to close our eyes, fly blind for some undisclosed intervening moment of unspecified length, and when we open our eyes again, we're grounded by familiar territory again.

It really is kind of stultifying.

I never liked the name complex numbers. Something like dual numbers would have been much nicer and more descriptive.

By the way any idea why they don't mention the sedenion numbers? https://en.m.wikipedia.org/wiki/Sedenion

Pachimari, of course, is the official mascot of octonions everywhere: https://kotaku.com/overwatchs-little-onion-octopus-has-becom...

> They “imagined that the next bit of progress will come from some new pieces being dropped onto the table, [rather than] from thinking harder about the pieces we already have,”

When the best minds we have divide into two camps, one saying that (a) "with X and Y we don't have enough information to solve Z" and the other saying (b) we do but we need to think harder, the first camp builds a particle collider, the other creates what, string theory? Aren't we doing "fuzzy science" here?

It seems that the experts in one of the camps should go back and retrace their steps because somewhere along the line they made an assumption based on some data that they (I assume) forgot to encode into their equations and now they have trouble taking it to the next level. Why is it not clear to us that either a or b is true?

So, did Furey get a faculty job or can she be viewed with her accordion somewhere?

FWIW, I'll certainly tell my department chair in New Orleans about her if they're doing another faculty search soon. Maybe she can have her cake and eat it too that way.

I wonder how many hardcore Harry Potter fans are smiling at the 1st photograph?

This article reminded me of the wave function being a complex-valued function! Does this have anything to do with octonions, nature's symmetries/invariants or the Standard Model?

Are we getting to the point where physics actually isn't showing any more useful properties for us to leverage?

Isn't that kind of what it would mean if it turned out octonions were at its core?

A layman's quest to understand wtf this is...

>In mathematics, the octonions are a normed division algebra over the real numbers.

wtf is a normed division algebra??

>In mathematics, Hurwitz's theorem is a theorem [...] solving the Hurwitz problem for finite-dimensional unital real non-associative algebras endowed with a positive-definite quadratic form.

... right.

I have the same problem when I try to understand anything statistics related, I get hit by a barrage of unknown words and my brain just melts.

Is there any place that explains mathematical concepts in ... different ways?

Those are good for supplementary exposition (especially 3Blue1Brown), but they're not suitable for learning on their own. They're also a bit of a hodgepodge beyond calculus and linear algebra.

To understand what's going on here in any meaningful sense, the parent commenter should pick up an accessible undergrad textbook on abstract algebra. It doesn't have to be particularly advanced. Then they'll have a better foundation for understanding the algebraic features of various number systems.

> wtf is a normed division algebra??

An "algebra" is a set of thingies that have binary operations defined on them (i.e., operations that take two thingies as input and output another thingie). The thingies are usually referred to as "elements" of the algebra. Any algebra will at least have the "add" and "multiply" operations, but might also have others.

A "division algebra" is an algebra where the operations are add, subtract, multiply, and divide, and all of them work basically the same as they do with ordinary numbers (but they won't have all of the same properties--see below).

A "normed division algebra" is a division algebra with one additional operation called "norm", that takes as input any element of the algebra and outputs a real number, the "norm" of that element. On the ordinary real numbers, the norm is just the absolute value.

The "over the real numbers" part means that you can construct the normed division algebra by starting with real numbers; or, to put it another way, all of the elements of the algebra are "made of" real numbers. The simplest way of viewing this is as a repeated operation of pairing: complex numbers are made of pairs of real numbers, quaternions are made of pairs of complex numbers (hence sets of four real numbers, hence "quater"), and octonions are made of pairs of quaternions (hence sets of eight real numbers, hence "octo"). But each step in this series loses a key property. The reals are totally ordered; the complexes are not. The complexes are commutative under multiplication; the quaternions are not. The quaternions are associative under multiplication; the octonions are not.

> Is there any place that explains mathematical concepts in ... different ways?

Not really, because the only way people have found to really understand mathematical concepts is to build them up out of simpler mathematical concepts. That means you can't just encounter a complicated mathematical concept and expect to understand it if you don't understand all the simpler concepts it is built from. There are no shortcuts.

Well yeah, math books...

Jokes aside, I too think we need a new framework for popularizing math that actually tells you what you need to know, without just handwaving at it, BUT without the amount of technical detail of a mathematics class.

What do you know, I think this is possible, too. You can communicate a surprising amount of information if you use words properly.

Of course, since this has never been done except for basic maths, it would be quite a task to embark on, and one would only do it if it made economic sense.

> Jokes aside, I too think we need a new framework for popularizing math that actually tells you what you need to know, without just handwaving at it, BUT without the amount of technical detail of a mathematics class.

oh man, have i got bad news for you about the amount of technical detail present in math classes.

> only do it if it made economic sense

What if writing wikipedia articles was treated like publishing on prestigious journals? (somehow)

wikipedia already covers abstract algebra (the subject necessary for understanding the math in this article) rather well. what more do you want?

Something that covers abstract algebra rather well if you don't already know abstract algebra well enough to know what all the terms mean?

I mean, I suppose you could try to walk the graph of definitions of terms in Wikipedia to try to understand one of the articles. But I'm pretty sure that's not an acyclic graph, and it's not clear where to start in that process.

Thank Richard Borcherds (R.E.B.) for a lot of that.

I'm still hoping for an ELI5 version of Wikipedia (e.g. as a language option).

I don’t know if it’s a language option, but a Simple English version of Wikipedia does exist. https://en.wikipedia.org/wiki/Simple_English_Wikipedia

For more brain melting, here's a roadmap for learning React in 2018. React and associated tech is what, five years old?

Here's what that looks like: https://github.com/adam-golab/react-developer-roadmap

Then consider maths, how long and how many people have been contributing to that field?

What’s funny is most of those projects in the leaves probably won’t even be around in another 10 years.

Unfortunately there’s a lot of domain knowledge wrapped up in very short phrases. What you want to learn is abstract algebra, if you want to google for YouTube videos that break it down for you.

unital -> there is a multiplicative identity 1 such that 1.a = a for all a.

normed -> there is a norm: a way of saying how big an element is. In the reals |x| = x if x is positive and -x if x is negative. In the complex numbers |a+bi| = (a^2 + b^2)^(1/2)

Division -> division (except by zero) is always possible. That means for any a and nonzero b there exists c such that a = cb (c is a divided by b)

I could be wrong about these things since I'm quite rusty

I think division is normally framed rather as the existence of multiplicative inverses for all non-zero elements of the ring. That is, R is a division ring if

a) there exists a 1 in R

b) for all nonzero x in R, there exists y in R such that x * y = 1.

I'm pretty sure these are equivalent.


(==>) Let a = 1, and b in R be arbitrary and nonzero. Then there exists a c such that c * b = 1. So inverses exist!

(<==) Let a and b in R be arbitrary, with b nonzero. Then a = a * b^-1 * b, so c = a * b^-1. QED.

Can anyone expand on the current successful similarities between octonions and the standard model in plain English?

If it goes: 1D, 2D, 4D, 8D, why does it stop at 8? Is there a hexadeconion?

They are called sedenions [1] and you can repeat the construction taking you one level up [2] infinitely.

[1] https://en.wikipedia.org/wiki/Sedenion

[2] https://en.wikipedia.org/wiki/Cayley–Dickson_construction

'Looking like an interplanetary traveler, with choppy silver bangs that taper to a point between piercing blue eyes' WTF? for a few minutes, I thought this was a serious article by a serious journalist. Apparently not.

If anyone can make octonions cool it would be that physicist.

Her name is Cohl Furey, that is SO COOL!
