The exponential function is a miracle (plover.com)
225 points by ColinWright on Sept 13, 2019 | 180 comments



This is such a weird thread. There are a bunch of people arguing about why their favorite math thing is more of a miracle than someone else's favorite math thing.

Nothing is surprising if you've seen it before. Let's just let each other be excited about our favorite math, okay?

Euler's identity is the one that gets me:

    e^(i * pi) + 1 = 0
How can this be? The five fundamental constants are related!


> exp(i*pi) = –1 ... How can this be?

The “plain English” expression of this formula is: a half turn rotation in a plane is equivalent to a reflection across the axis of rotation.

YMMV, but I have found that small children can understand this statement.


Talking about rotation as if that is obviously relevant completely skips the surprising and interesting part of this identity.

edit: Said another way, what you said is an explanation for what (cos(pi), sin(pi)) is. That exp(i*pi) has anything to do with sine or cosine is what makes the identity interesting, and your explanation says nothing about that.


Yeah, and also that an exponential is a turning function, or that you can even take a number to an imaginary power in the first place, or that turning the right amount just happens to take the power of e (and not, say, 2).


It's a lot less surprising if you look at the functions as Taylor series, then you'd see that exponentiation is basically the same thing as cos + sin. This also explains "why e and not 2?", because the Taylor series for e, cos and sin are very nice by definition (e is defined to have trivial derivative, and radians are defined to make sin and cos have simple derivatives) while the Taylor series for exponentiation by 2 is not.
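
For the curious, here is a minimal Python sketch of that comparison (the test angle and the 30-term truncation are arbitrary choices):

    import math

    def exp_series(z, terms=30):
        # partial sum of the Taylor series 1 + z + z^2/2! + z^3/3! + ...
        total, term = 0, 1
        for n in range(terms):
            total += term
            term *= z / (n + 1)
        return total

    x = 0.7  # arbitrary test angle
    print(exp_series(1j * x))                 # (0.7648...+0.6442...j)
    print(complex(math.cos(x), math.sin(x)))  # same value: cos x + i sin x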


Agreed, but that is a lot of overhead to get to the insight.


> that turning the right amount just happens to take the power of e (and not, say, 2)

username90 already explained this one, but in my own rephrasing:

The premise is incorrect. It isn't that "turning just happens to take the power of e". e is defined to be the number for which this is true.


That kind of "it's defined like this" explanation has always confused me in math in general. There are two really different things:

- the mathematical concept was initially created as a way to solve this particular problem, and so this should come as no surprise

And

- after centuries of trying to define this concept, we found that the best way to define it is like that.

In your case, was "e" created to perform 2D rotations in the first place?


We can look at history:

1. e was discovered in 1618

2. i was introduced around 1637

3. calculus was formalized by Newton in 1687

4. derivatives of sine and cosine were discovered in 1722

5. de Moivre's formula (cos x + i sin x)^n = cos nx + i sin nx was discovered in 1730

6. Euler's formula e^ix = cos x + i sin x was discovered in 1748 by using Taylor series (which requires those derivatives).

So we knew about e and i more than a century before Euler's formula.

However we discovered it pretty soon after we started applying calculus to trigonometric functions. Also we can note that it is strictly more powerful than de Moivre's formula, since it shows that we can easily add angles just by multiplying their complex representations, so it is not just a simplification of old knowledge.

So it seems like the discovery of Euler's formula actually did help with working with rotations. However, it was done during a time when we were still exploring the applications of calculus, so there was a lot of low-hanging fruit like this to pick.

So to directly answer your question: e was not created to perform 2D rotations; instead we discovered that putting i inside of e makes 2D rotations simple. But the actual important identity you care about is that you can add angles by multiplying, which is easy to prove using Euler's formula:

(cos x + i sin x) * (cos y + i sin y) = cos (x + y) + i sin (x + y)
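
A numeric spot-check of that identity (a Python sketch; the angles are arbitrary):

    import cmath

    x, y = 0.4, 1.1  # arbitrary angles
    lhs = cmath.exp(1j * x) * cmath.exp(1j * y)  # multiply the two rotations
    rhs = cmath.exp(1j * (x + y))                # rotate once by the summed angle
    print(abs(lhs - rhs))                        # ~1e-16: equal up to rounding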


> Euler's formula e^ix = cos x + i sin x was discovered in 1748

Discovered by Roger Cotes in 1714, in IMO its most natural form, ix = log(cos x + i sin x)

> actual important identity you care about is that you can add angles by multiplying

Personally I think the multiplicative concept of rotations is the natural one, with rotations associated to points on a circle embedded in the plane, rather than associated to arclengths. The amazing thing is that we can compose rotations (naturally multiplicative) by first taking the logarithm of the rotations (a.k.a. angle measure) adding them, and then taking the inverse logarithm. The logarithm is a tool which turns multiplication into addition.

The protractor and the slide rule turn out to be more or less the same concept, which is why we can substitute the former for the latter in https://en.wikipedia.org/wiki/Prosthaphaeresis


If you explore the problem domain, starting with an understanding of trigonometry and calculus, it will naturally emerge as a result. It was not arbitrarily defined.

Imaginary numbers were not arbitrarily defined either, they precisely describe an actual phenomenon.

At a certain level, it is difficult to distinguish between "the reality we decided is true" for math vs. "the reality that must be true underneath". If you accept empiricism, such that you trust observations we make about the physical world, this problem becomes irrelevant to this discussion.


No, as username90 says, e was not created to perform rotations. The aspect of e that comes into play for the relation to sine and cosine is that the function e^x satisfies the differential equation df/dx = f. This makes the Taylor series convenient by not introducing factors of (ln K)^n in each term.

Thus, 2^ix = cis (x ln 2) = cos (x ln 2) + i sin (x ln 2). And since e is, by definition, the number that satisfies ln e = 1, e^ix can be stated more simply as cis x. And that identity, ln e = 1, is in fact the original motivating definition of e.
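
In Python this is easy to confirm (a sketch; the angle is arbitrary):

    import cmath, math

    x = 2.0  # arbitrary angle
    print(2 ** (1j * x))                    # 2^(ix)
    print(cmath.exp(1j * x * math.log(2)))  # cis(x ln 2): the same point
    print(cmath.exp(1j * x))                # cis(x): lands somewhere else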


There are many valid ways to arrive at e as an interesting constant. A particular way of arriving at it might provide a connection to something else (such as 2D rotation), but that doesn't mean it was created for that purpose.


Personally I think the cool part is that the complex logarithm takes the stereographic projection and turns it into the Mercator projection. That is, it conformally maps the plane onto an infinite two-ended cylinder.

The expression log(–1) = iπ more or less just tells you how to normalize the coordinate system on the cylinder. If you wanted you could pick a somewhat different coordinate system. This one is conventional and often convenient though.


Complex numbers are about rotation. That’s the only odd insight in Euler’s equation. Once you get that, it all falls into place (and in fact starts to explain all the use of complex numbers in engineering).


> Complex numbers are about rotation.

Why?

(I'm not denying that it is, just pointing out that like the person I first replied to, you are jumping over the interesting part here and stating interesting conclusions like they are definitions.)


A complex number “is”† a quotient of two Euclidean vectors in a plane. That is, Z = u\v is the object which when you multiply it by one vector u, scales and rotates that into another vector v. We can write uZ = uu⁻¹v = v.

http://geocalc.clas.asu.edu/pdf/OerstedMedalLecture.pdf

https://geocalc.clas.asu.edu/pdf/GrassmannsVision.pdf

† More precisely, the algebra of complex numbers is structurally the same as the algebra of quotients of planar vectors.


As I said, "that's the only odd insight." I'm not claiming it's obvious, it's not. but once explained it makes sense and the rest falls in place. Here's a better resource than I can type up:

https://betterexplained.com/articles/a-visual-intuitive-guid...


Hm does that mean you could say the complex quantum probability is just a probability over rotations?


Well, you add up all the rotation-like stuff and square the result to get the probability. In the book QED, Feynman describes it as adding little arrows to explain it to a lay audience. In physical terms it's probably most like adding waves, where the square of the amplitude or wave energy is the probability, but not quite - no one really knows what the actual reality is.


Kinda sorta, imagine that probability has a direction. Interference is then about alignment of these directions.


What does electrical engineering have to do with rotation? Generators perhaps run in circles, and an inductor is a coil often enough, but a capacitor?


Electromagnetism concerns movement of what appear to be moving circles. If you roll a circle with a pen marker it draws a sine or cosine curve. If you try to draw the spiral of a light wave you get perpendicular sine and cosine curves. Light is electromagnetic radiation and in turn both electricity and magnetism are fundamentally connected forces.


there is no "spiral of a light wave" lol. If anything, I'd think of transverse movement. Ironically I cannot say that nobody has ever seen a photon. But nobody has ever seen an electron. There could be a model in which electrons move transversely (at mm/s), mediated by photons, but I have not looked that closely into it. It's not part of the usual theories.

if you attach a pen to a hoop and move the hoop along a wall, you do not get a wave, you get (edit: what looks like but isn't) a sharp discontinuity where the pen touches the ground (https://google.com/search?q=cos%28x-sin%28x%29%29)


Circularly polarized light is a spiral, at least in terms of how we normally draw the field vectors.


You may say that AC has a lot to do with rotation.


EM is about rotating fields. Phase is very fundamentally connected with rotation. These are where complex numbers show up.


I think it makes even more sense when written in terms of tau.

    e ^ (i * tau) = 1
In plain English: “rotating by a full turn is the same as doing nothing”.


This, and the unit circle, are by far the most compelling arguments for tau.

Euler's identity isn't some mysterious thing where all these mathematical building blocks meet, it's a completely straightforward and fundamental consequence of circles existing!


2pi is a full turn. Using a different name doesn't change much.


I've never considered that (in itself) a satisfying or complete explanation, because it just handwaves away the problem of how it is that imaginary exponentiation gets you a rotation in the complex plane, and feels like you're just defining it as such, which isn't imparting any insight.

Yes, small children can understand the bit about "two half turns put you in the reflected position", but not "why is i rotating a complex number at all".


> but not "why is i rotating a complex number at all".

Yes, this part requires getting a good amount of practice with vectors, and is best saved for kids aged maybe 10+ (depends a lot on the kid and their level of preparation).

The most straightforward approach is to define I to mean a quarter turn rotation anticlockwise in the plane. Then try to figure out what a quantity like Z = a + bI (where a and b are scalars) would mean: if we multiply it by a vector v we get Zv = av + bIv, that is: a part of the vector av pointed in the same direction as v plus a part of the vector bIv pointed in a perpendicular direction.

Now we can investigate what happens when we multiply I(Iv): we rotate v a quarter turn, then another quarter turn. Or in other words, I(Iv) = –v for any v. Because our multiplication is associative and vectors have inverses, we can write II = –1.

The tricky part of the abstraction here is that we can treat scale + rotation transformations of the plane like numbers. Multiplying them corresponds to composition of transformations. Adding them and then applying the sum to a vector is the same as applying each separately and adding the parts afterward.

Getting comfortable with this algebraic system is certainly not trivial.

Then there is a neat insight that when we take a pure rotation’s logarithm (a.k.a. “angle measure”), that turns out to have magnitude proportional to the arclength between two rotated points on a circle.
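
A concrete model of this algebra (a sketch; I is represented here by the 2×2 quarter-turn matrix, and numpy is assumed available):

    import numpy as np

    E = np.eye(2)                 # the identity transformation: "1"
    I = np.array([[0.0, -1.0],    # quarter turn anticlockwise:
                  [1.0,  0.0]])   # sends (x, y) to (-y, x)

    print(I @ I)                  # -E: two quarter turns make a half turn, II = -1

    a, b = 3.0, 2.0
    Z = a * E + b * I             # the "number" a + bI as a plane transformation
    v = np.array([1.0, 0.0])
    print(Z @ v)                  # [3, 2] = av + bIv: a part along v, b part perpendicular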


>The most straightforward approach is to define I to mean a quarter turn rotation anticlockwise in the plane

Like I said originally, if you're just defining i to work this way, you're not conveying any of the insight behind why this is a natural extension of the existing rules/definitions of i, sine, exponentiation, etc.

All your latest comment is doing is meticulously spelling out the concept of rotation in a plane. Again, that's not the hard(est) part of proving this result or of conveying the intuition. You're handwaving away 90% of it. It's not reasonable to characterize someone as "understanding" it because they get rotation, the last 10%.


The “existing rules/definitions of i, sine, etc.” are poor places to start.

There are many ways to define these concepts. The traditional versions are needlessly obscurantist and get the appropriate pedagogical/conceptual order backwards.

The proper pedagogical definition for I is “the ratio of two vectors of the same magnitude which point perpendicularly in the plane”, or “the transformation which rotates vectors in the plane by a quarter turn”. Defining I as √(–1) and then working with purely formal quantities of the type a + bI is much harder to follow. It just seems completely arbitrary and invented (which is why people had trouble with it historically, and many students still do today).

If you define I to mean a quarter turn, then having it square to a half turn makes perfect sense. Once you know that a half turn is equivalent to a reflection across the axis of rotation, then it’s pretty clear to see why a half turn of planar vectors should be written as the scalar –1. So the property that I^2 = –1 emerges naturally.

Then when you sometime later start talking about logarithms, there’s a nice opening to talk about what the logarithm should be of a rotation. This ties in nicely with a discussion about position, velocity, and acceleration in uniform circular motion, etc.

* * *

It seems like there is some “deep insight” when you start with very obscure concepts/notation defined purely formally/abstractly and then fiddle with them a bit and suddenly out pops something simple and concrete. It makes for a good magic trick or punchline at the end of the tedious slog that is a traditional math course.

But it’s better to start with simple concrete ideas and notation which are meaningful a priori.


>Then when you sometime later start talking about logarithms, there’s a nice opening to talk about what the logarithm should be of a rotation

Right, so you agree your explanation isn't covering the connection to the existing rules of i, and is just explaining how rotation works and giving it a symbol. That's great, but it has nothing to do with the insight that people are actually impressed by, which is that the concepts and rules created for a different domain (e.g. exponentiation and i as square root of -1) naturally extend in such a way that gives the Euler equation and so on.

You're not explaining that insight at all, and shouldn't consider anyone to actually understand the insight if they get it. All you're explaining is the concept of point rotation. You shouldn't represent that as "oh, a child understands why exp(pi i) = -1, see, I explain it just fine!"

If you prefer teaching things in that order, great! If you're dismissive of formal and abstraction notation, great!

But you're not actually explaining the amazing insight everyone here is celebrating.


You seem to be misunderstanding what I am trying to say. Sorry if it was unclear.

If I am explaining to small children that a half-turn rotation is the same as a reflection through the axis of rotation, then there is no symbol at all. Just words and physical manipulation. It’s a simple idea.

If I am explaining to a high school student (or a well prepared 11 year old) what complex numbers mean, then we are going to start by doing a bunch of discussion of vectors in the context of geometry and mechanics.

If I am explaining how logarithms work, we are going to attack them from many directions: iterated multiplication and compound interest, exponential growth/decay, velocity proportional to current position, uniform circular motion, ...

There are many subtle and interesting concepts involved here. It’s worth taking them slowly and spending a few years building up fluency.

There is indeed an insight that the logarithm of a rotation is proportional to arclength. That’s really the key insight we are talking about here. Why does that happen? That part is pretty interesting and worth exploring (but not nearly so difficult or mind blowing as it is made out to be).

Iterated multiplication of rotations behaves similarly to iterated multiplication of scalars, and we can take logarithms of rotations and then add them just like we could do with logarithms of scalars. Uniform circular motion turns out to be a type of exponential growth.

Now if you take a rotation R and look at the arclength from some vector x to Rx on the circle (an abstract circle in the space of displacement vectors) of squared radius x^2, and then you iterate R, you get proportional arclengths.

That is, arclength(arc of origin-centered circle from x to RRx) = 2 × arclength(arc from x to Rx). And this extends to any other number of iterations of R. More generally, if we compose two rotations then we add their arclengths.

So arclength of the circular arc is proportional in general to the logarithm of rotation, and we can therefore use arclength as a model of the logarithm. (This is called “angle measure”, and we can physically measure it with a protractor).

> concepts and rules created for a different domain

The history of the understanding of astronomy, complex numbers, etc. is not really relevant at the introductory level.

Slide rules and protractors are however useful tools to teach about.

Leading off with power series, classical trigonometry, complex numbers defined as purely abstract formal objects invented for finding roots of polynomials, etc. is not “insightful”, it is just obfuscatory. It is a product of our current anachronistic approach to teaching which is based on training human computers even though we no longer need them and neglecting problem solving, and focusing almost entirely on algebra and a symbol-heavy framing of “trigonometry” and calculus at the expense of geometric understanding.

At some later point it’s all fine and dandy to talk about the history of astronomy and chord/sine tables, the development of differential equations and the understanding of 2D and 3D rotation: Hipparchus, Archimedes, Ptolemy, Madhava, Al Kashi, Bürgi, Napier, Mercator, Newton, the Bernoullis, Euler, Argand, Rodrigues, Gauss, Cauchy, Riemann, Hamilton, Grassmann, Cayley, Maxwell, Gibbs, Clifford, Möbius, Klein, Lie, and all the rest. But not as an introduction.


Well, the first time it clicked for me was when I realised that rotational exponentiation is what provides for perfectly smooth transitions between even and odd powers of negative numbers.


From this point of view, it's not easy to see what exp(1) should be. The beauty is in the connection.


What the precise value of exp(1) turns out to be isn’t really the point though.

That’s one reason that e^x is a pedagogically problematic shorthand for exp(x).


I disagree, I think it's very much essential to the point.

For one, 10^(i*pi) is some horrible complex number, which is what you'd expect when exponentiating random complex numbers. Exercise: figure out what it is from the rotation point of view.

Secondly, usual definitions of e don't involve complex numbers at all. Nor do questions such as whether e is rational, algebraic, etc.

Thirdly, if you go with this argument too far, you'll conclude that the value of pi is also not important. After all, circles must have some length, who cares what it is precisely?


> 10^(i*pi) is some horrible complex number

Sure, you want to use a convenient coordinate system. But you already have 2 horrible numbers in the conventional coordinate system: π and e.

In the conventional ("natural") coordinate system for logarithms, the coordinates represent powers of e and rotations by 1/(2π) of a turn, respectively.

You have just decided that you’ll use those specific horrible numbers in the logarithmic coordinates, because e and π are what you get out if you choose a coordinate system for the logarithm of quotients-of-vectors where the derivative of log(x) at the point x = 1 is 1. This turns out to be convenient for simplifying differential equations and related formulas.

Personally I like using a coordinate system where the “scale” axis of the logarithm uses units of doublings, and the “rotation” axis of the logarithm uses units of full turns. If I use that one when making a picture, I can easily tell you exactly what the value is (as a rational number) at every grid intersection.

e.g. here is a plot of a Möbius transformation, https://raw.githubusercontent.com/jrus/images-for-observable... Every contour at the most prominent level is either a doubling or 1/12 of a turn. (You’ll notice that when we zoom on any part of the picture, we see little rectangles rather than little squares; the “natural” coordinate system is a square one, which is also often convenient.)
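
In code the change of units is just a rescaling of the two logarithmic axes (a small Python sketch; the sample point is arbitrary):

    import cmath, math

    z = complex(3, 4)
    w = cmath.log(z)                  # "natural" coordinates: (ln|z|, radians)
    doublings = w.real / math.log(2)  # scale axis in doublings: log2(5) = 2.3219...
    turns = w.imag / (2 * math.pi)    # rotation axis in full turns: 0.1475...
    print(doublings, turns)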


You explained that 'a half turn rotation in a plane is equivalent to a reflection across the axis of rotation' to a child and they were able to figure out how 'exp(i*pi)=-1' was derived? That's remarkable!

I think these children were on a much higher level than most adults that we'd still consider smart, I wouldn't put down anyone for not immediately seeing these two things as equivalent.


No, the children were able to understand how if you rotate a piece of paper by half a turn, all of the points end up the same distance from the axis of rotation, but the opposite direction from where they started.

All of the stuff about exponential functions and angle measures and “imaginary” numbers just obfuscates the core idea.


But the core idea involves why imaginary exponentiation means a rotation in the complex plane! If you don't have that part, you're not talking about i at all, you're just teaching the concept of rotation, with the same symbols as are used in complex math.

You would have the exact same explanation if you were teaching about rotation unconnected to imaginary numbers or exponentiation at all!


But that's just false. Rotations preserve orientation, reflections do not. They can't be equivalent. Two reflections can be equivalent to a rotation however, because that restores the original orientation.


> But that's just false. Rotations preserve orientation, reflections do not. They can't be equivalent. Two reflections can be equivalent to a rotation however, because that restores the original orientation.

I believe the parent meant the statement as it applies to a single point. E.g. meant to say "a half turn rotation in a plane [of a point] is equivalent to a reflection across the axis of rotation."


A reflection across the axis of rotation, which is a point, not a reflection across a line.

–1 is the quotient of any two vectors which have the same magnitude and point in opposite directions.


That is the best explanation I've ever seen.


It's kinda sad that the formalism here is far less clear than natural language. I often wonder if, instead of formalisms, mathematicians should just have used Attempto Controlled English.


They tried that before the nineteenth century and it stymied progress. Imagine multiplying just two-digit numbers without any notation.


They didn't try Attempto Controlled English in the nineteenth century, as it has only existed since the end of the twentieth century.

> Imagine multiplying just two-digit numbers without any notation

I agree, notation can be useful for concision. Far less for clarity. Instead of black-or-white thinking, I wish mathematicians would use more well-defined English in their proofs, but still use notation where it is clear and more concise.


I was never very good at geometry and this explanation has me feeling like a really stupid child.



That makes a lot of sense now -- thanks!


The surprising thing is that a bit of arithmetic results in something like geometry.


For me it would be surprising only if the equation were about real numbers. Exponentiation of complex numbers is defined in a way that makes this trivially true. Or at least this is my perception as a non math-expert.

My approach to Euler's equation (as a non-expert) was the following:

1. Try to understand the meaning of the operations in the equation. Search for the definition of the exponential on complex numbers, because it was not obvious to me how it is defined there.

2. I have read that it is defined by angles and the unit circle: e^(alpha * i) is the point on the unit circle at angle alpha.

3. Looking at the equation: this is trivial, it basically says that cos(Pi) is -1.

What am I missing?

Edit: And why is exponentiation defined this way? Why is the base e? Why isn't it 2, like this:

"2^(alpha * i) is the point on the unit circle at angle alpha."

Would this definition lead to contradiction or some difficulties?


That is not actually the way to define e^x for complex numbers. There are a few ways you can do so. The most natural way is as the unique solution to the differential equation

df/dx = f

The solutions are C e^x for a constant C; requiring f(0) = 1 gives you the choice C = 1.

Another way to define it is as the infinite series

e^x = \sum_{n=0}^inf x^n/n!

It is pretty easy to show this definition is equivalent to the previous one. A third way is as

lim_{n -> inf} (1 + x/n)^n

This definition comes from the intuition that e^x represents the limit of continually compounding interest. As before, it is pretty easy to show that it is equivalent to the previous two.

In any case, all these definitions extend directly to complex values of x. The fact that

e^{ix} = cos(x) + i sin(x)

holds (and hence Euler's identity holds) is a consequence of these more natural definitions, rather than being the definition itself.
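
All three definitions are easy to compare numerically; a rough Python sketch (the truncation and step counts are arbitrary):

    import math

    x = 1.0

    # definition 1: crude Euler integration of df/dx = f with f(0) = 1
    f, steps = 1.0, 100_000
    for _ in range(steps):
        f += f * (x / steps)

    # definition 2: truncated power series
    series = sum(x ** n / math.factorial(n) for n in range(20))

    # definition 3: compounding limit with large n
    compound = (1 + x / 100_000) ** 100_000

    print(f, series, compound, math.e)  # all approximately e = 2.71828...

Note that the crude integration and the compounding limit are literally the same computation here: the compound-interest formula is exactly the Euler scheme for the differential equation.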


Further evidence that e^{ix} = cos(x) + i sin(x) is natural is that it fits in nicely with power series representations. If you define cos(x), sin(x), and e^x by their power series centered at 0 then it's straightforward to see that substituting ix into the power series for e^x yields the sum of the power series for cos(x) and i*sin(x) (as long as you accept that theorems about absolute convergence and rearranging terms extend to the complex numbers).


The lecture in my Calculus IV course where we walked through that derivation is literally my only memorable math lecture in university. It just so beautifully ties so many different concepts together!


i x is x rotated left by pi/2

df/dx = f

df(ix)/dx = i f (ix)

so the "velocity" is always orthogonal to the "position". Thus the solution to this equation in the complex numbers has to be a rotation.


The derivative of e^ix is ie^ix. Multiplication by i is equivalent to rotation by 90 degrees. The derivative of velocity is acceleration. Acceleration perpendicular to velocity is circular motion.

With base 2 the motion is not circular.

All these pieces (and an initial value problem differential equation) come together to explain the identity.
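
That picture is easy to watch numerically (a Python sketch; the crude Euler stepping lets the radius drift slightly):

    import math

    n = 100_000
    dt = math.pi / n          # integrate for total time pi
    z = 1 + 0j                # start at 1 on the unit circle
    for _ in range(n):
        z += 1j * z * dt      # velocity i*z: always perpendicular to position
    print(z)                  # ~(-1+0j): half a turn later we land on -1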


The surprise is that the definition you gave in 2 fully agrees with the definition of multiplication that comes naturally from the Cartesian form of complex numbers: z = x + bi -> (x1 + b1 i)(x2 + b2 i) = (x1 x2 - b1 b2) + (x1 b2 + x2 b1) i

Vector multiplication has two very different forms (dot and cross), so multiplication (and by extension exponentiation) working out smoothly is neat.


Complex exponentiation is not defined arbitrarily to make Euler identity true. The Euler identity is a consequence.

I love the Euler identity in its typical form, but I think it is more surprising when you replace e and pi by their (approximate) numerical values and i by sqrt(-1)


> Why is the base e? Why isn't it 2, like this

One of the many things that makes e special is that e^x is the only function that is its own derivative. One of the things that make sin and cos special is that they are each other's derivatives (except for a change of sign in the case of dcos/dx). This doesn't prove anything on its own, but it suggests that there might be some relationship between e and the trig functions, and Euler's equation proves that indeed there is.


I suspect it would spiral off the unit circle.


I will take the contrary viewpoint. It matters what mathematics we consider to be beautiful and intuitive, because better mathematical intuition will make you better at solving mathematical problems.

On that note, I think e^i pi = -1 is drastically overrated. That is typically the definition of exponentiating to a complex power, that e^ix = sin x + i cos x. Of course something can be beautiful if you take it to be an axiom. It’s like saying x=x is beautiful. Also it’s really just sticking -1 in the more-intuitive, more-useful formula so that it looks cool.

A really beautiful formula is that x^p - x is divisible by p when x is an integer and p is prime....


Though it’s probably the most commonly used definition for complex exponentiation, e^i pi=-1 is not an axiom— it is derived from the basic rules of integer arithmetic expanded in scope until they make a closed system.

And from this, we can derive all of the trigonometric functions using only algebra and no geometry.

(See http://www.feynmanlectures.caltech.edu/I_22.html , particularly section 22-5)


> I think e^i pi = -1 is drastically overrated. That is typically the definition of exponentiating to a complex power, that e^ix = sin x + i cos x.

Yes, but it's the definition of exponentiating to a complex power because it is easily proven by reference to the real Taylor series of those functions. The complex definition is motivated by the real result, not -- as you imply -- the other way around.

(Also, you have it backwards; e^ix is cos x + i sin x.)


You can easily derive e^ix = i cos x + sin x.

Just look at the Taylor series for e^x and compare with the Taylor series for sin x + cos x. You'd see that they are basically the same, just that the trigonometric functions alternate their signs every 2 steps. How do we alternate the signs in e^x? We just add a factor of i; that way we get alternating signs every two steps but an additional i on every odd step. Odd steps are sin, so just add i to sin and we are done: e^ix = i sin x + cos x.
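
Spelled out term by term, the interleaving looks like this:

    e^(ix) = 1 + ix - x^2/2! - ix^3/3! + x^4/4! + ix^5/5! - ...
           = (1 - x^2/2! + x^4/4! - ...) + i(x - x^3/3! + x^5/5! - ...)
           = cos x + i sin x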

Now we also see that you remembered the wrong formula: the sine part is imaginary, since it corresponds to the odd terms.


I have no problems with this route to show that e^ix = cos(x) + i sin(x) but to do it properly does require showing that this is the unique analytic continuation of the exponential function.

The alternative method is to note that exp() is defined as the unique function satisfying exp(0) = 1 and d exp(ax)/dx = a exp(ax). So exp(ix) describes a point moving orthogonally to its current position, which means it traces a circle, and since its speed is proportional to its distance from the origin it is moving at a constant speed. So e^ix is just a point x radians around the unit circle.


(it's the other way round, e^(ix) = cos(x) + i sin(x))


Addition is commutative.


Not sure whether that was meant as a joke or not, but the difference is which term is real and which is imaginary, not the order of the terms...


If you're curious about this, betterexplained has a good article on Euler's Formula:

https://betterexplained.com/articles/intuitive-understanding...

As a bonus, understanding how it is related to rotation may give you a new insight on the Fourier Transform: https://betterexplained.com/articles/an-interactive-guide-to...


For me the most mysterious equation is:

⌊π⌋ - ⌈e⌉ = 0

This equation shows connection between pi, e, zero and number theory.


I mean, you could arbitrarily chop the precision of all sorts of things to make them "equal". It hardly seems significant or illuminating.


It's a joke.


  Let's just let each other be excited about our favorite math, okay?
I don't think it's malicious, we're just trying to narrow down exactly what makes something interesting. In the process, we get to understand our favorite math more deeply.

  Nothing is surprising if you've seen it before.
On the other hand, if you can find a theory that makes a miracle look mundane, then the theory is probably interesting. So finding reasons why something is `not surprising' is a good guide to finding interesting math.


You mean the seven fundamental constants?

e^(i * pi) + 1 = 0 * 17 * 42


Roger Hui: In the year 2033 Earth was discovered by Survey Fleet MCXII of the Galactic Empire. The Emperor ordered Earth to send a representative to the court, with strict instructions to “bring something beautiful”.

Emperor: What beautiful things does Earth have?

Earth Representative: Your excellency, our mathematician Euler proved in our year 1737 that +/(1+⍳∞)-s ←→ ×/÷1-(⍭⍳∞)-s

Emperor: What is the ⍭ symbol?

Earth Rep.: ⍭i is the i-th prime.

Emperor: And what is ∞? Does it do anything useful?

Earth Rep.: It denotes infinity, your excellency.

Emperor: (ponders equation for a minute or two) Respect!

Emperor: Neat notation you have there. Tell me more.

Earth Rep.: Your excellency, it’s called APL. It was invented by the Canadian Kenneth E. Iverson …


Still not sure what i and 17 are...


It's a joke about how Euler's identity isn't normally simplified and written as:

     e^(iπ) = -1 
because people wanna be "cute" about it containing 0 and 1, so why not a bunch of other arbitrary numbers too?


Setting an expression equal to 0 is quite standard form.


Number i is the complex unit, but I've no idea about 17 either.


It's the first arbitrary number. 4711 is the second arbitrary number. They were used as the numeric foo and bar when I studied programming in high school and college during the 90s. I always thought it was universal hackerdom, but I actually rarely see them referenced. Perhaps it's a European thing?


Do you have a source on "arbitrary number"? There were no obvious explanations online.


It's not a mathematical or in any way strict thing. It's just a number chosen to exemplify any number. Just like "foo", "bar", "baz", "qux", etc, a sequence of numbers could be e.g. 17, 42, 4711. I grew up with it, and googling it seems that at least 4711 is a German thing.


I was thinking, what is the smallest number not found in any sequence on https://oeis.org/?


17 is the favorite number of David Kelly, a mathematician and professor at Hampshire College. He loves it for a very large number of reasons. He was instrumental in creating a summer math camp at Hampshire which has trained many young brilliant mathematical minds (including Vi Hart). Thus it has spread.

http://www.vinc17.net/yp17/index.en.html


Following the links from that link, I find Wikipedia for 17 says, "Carl Friedrich Gauss (1777–1855) showed that the regular heptadecagon (17-sided polygon) could be constructed with ruler and compasses."

(I learned it as straightedge and compass.) Wikipedia did not mention the story that Gauss has a regular heptadecagon inscribed on his tombstone.


You can find such "collections" for numbers like 7, 8, or 13, even ignoring the integers 0-4, and 42 is obviously a reference to Hitchhikers guide, a work of fiction. Am I missing something?


Surely he loves it for 17 reasons.



There’s nothing magical here: the first term describes a 180 degree rotation around a circle of radius 1, landing you at -1. Then you add 1 and you land at the center of your circle, (0,0).


So why does the first term describe a 180 degree rotation?

People learn that 2^3 means to take three copies of 2 and multiply them together, and that i^2 is -1, so what does it mean to take (i pi) copies of e and multiply them together?

The part you are describing is not where the magic resides ...


Well, there is nothing magical in a magic trick after you explain it.


But why does rotation work as a model of complex numbers? It's (mathe)magical!


How do you like:

e^(i * tau) = 1

https://tauday.com/tau-manifesto


I actually find this less aesthetically pleasing. It removes the additive identity constant (0).

But that's subjective, who cares?


e^(i * tau) - 1 = 0


I recently learned:

> It has been claimed that Euler's identity appears in his monumental work of mathematical analysis published in 1748, Introductio in analysin infinitorum. However, it is questionable whether this particular concept can be attributed to Euler himself, as he may never have expressed it. https://en.m.wikipedia.org/wiki/Euler's_identity

But it is still cool.

https://youtu.be/yPl64xi_ZZA


I like your style. Mine is why is the kissing number for circles in the plane 6? Why is it even an integer? Seems totally rigged!


I too have spent some time pondering this equation. I've written down some of my findings.

https://nixpulvis.com/math/00-euler


3blue1brown has an excellent and concise video explaining this:

https://www.youtube.com/watch?v=v0YEaeIClKY


https://www.smbc-comics.com/comic/2013-04-02

An extension of this I find even more surprising


I prefer e^(i * 2pi) = 1 (or any multiple of i 2pi).

2pi (or tau) is simply a more fundamental constant for nearly all formulas related to trigonometry and DSP (excluding the sinc function).


I find it amusing that many replies to your comment illustrate your point very well.


If we just throw our hands up in the air and declare everything a "miracle" we never learn anything.


Fun fact: Gauss is said to have commented on this equation that "if this was not immediately apparent to you upon being told it, you would never be a first-class mathematician".


It isn’t the exponential function that’s a miracle, it’s the Maclaurin series. Think about it. The Maclaurin series says that in a very real sense, the value of a function at zero, and the value of all of its derivatives at zero, contains all of the information there is to know about it. Intuitively, this should be completely unexpected. Why in the world would the derivatives of a function evaluated at a single point tell you anything whatsoever about the value of the function arbitrarily far away from that point?

EDIT: as mentioned below, this is not true of all functions, even all functions that are infinitely differentiable at zero. But it is true for very large classes of functions.


> Intuitively, this should be completely unexpected.

It actually makes intuitive sense to me. I like to think of it this way:

1) If we have the value of a function at a given point and we want to extrapolate the function's values before and after that point, we need to know how the value is changing at that point: the derivative.

2) But if that derivative isn't constant, that won't get us very far. We also need to know how the derivative is changing at the given point: the 2nd derivative.

3) But if that 2nd derivative isn't constant, that won't get us very far. We also need to know how the 2nd derivative is changing at the given point: the 3rd derivative.

And so on, potentially up to infinity. But once we take into account all of the derivatives, we know how the value of the function is changing and how that change is itself changing, and as there is no additional change that comes "out of nowhere", so to speak, we have enough information to calculate the value at any other point.
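
Here is a small Python sketch of exactly that procedure for sin, using nothing but its derivatives at 0 (which cycle through 0, 1, 0, -1; the 12-term truncation is arbitrary):

    import math

    def sin_from_data_at_zero(x, terms=12):
        # the n-th derivative of sin at 0 cycles through 0, 1, 0, -1, ...
        derivs = [0, 1, 0, -1]
        return sum(derivs[n % 4] * x ** n / math.factorial(n)
                   for n in range(terms))

    print(sin_from_data_at_zero(2.0))  # 0.9092...
    print(math.sin(2.0))               # 0.9092974...: far from 0, yet recovered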


Your explanation is a nice restatement or interpretation of what the equation f(x) = f(0) + f'(0)x + ... is saying, but it doesn't give any insight into why the equation is true.

     and as there is no additional change that comes "out of nowhere", so to speak,
The whole explanation rests on this key point, and this is precisely the unintuitive part. In fact, it's not even always true. It's a "miracle" that it sometimes is true.


That's an interesting reaction... imho, the idea that change can't come out of nowhere is the most intuitive part. "How can something arise from nothing?" is a puzzling philosophical question precisely because our intuition is that it's impossible, that everything has to come from somewhere (and conversely, nothing can simply disappear). Functions where this expectation is violated -- e.g. non-constant continuous functions where all derivatives vanish to zero at a certain point -- feel like bizarre exceptions to me.


Fair enough. I do see how you could think that it's intuitive. I've even used your explanation when I'm teaching. Although I can't help but think there's some sort of circularity going on here, at least for me. My intuition on the subject is shaped by my knowledge of the theorem, and the functions that I tend to work with.


To know everything in derivatives you need to find the one where it's constant or a closed form series and its limit at infinity, otherwise you know nothing about how it's changing.


More than not always true, it's mostly not true. It's just true for a lot of functions we're interested in.


This is how I think of it too, and it makes some part of me want to believe that means I have an intuitive grasp of what’s going on. Except I don’t think that’s how math works. Rigor is necessary because an intuitive hand-wavey understanding is often wrong. I think there’s just as convincing a hand-wave available that dismisses the idea.

1) We have one point of the function.

2) We know one point of its derivative. We still don’t know any other points of the function.

3) We know one point of its second derivative. We still don’t know any other points of the function or its derivatives.

...

N) Same deal.

N+1) Same deal.

...

Inf) Now we know all the points in the function and also all the points in all the derivatives.

It looks absurd like this. Not to mention that we start with a countable set of points and derive a continuum from each.


The moment we start taking derivatives we leave the world of countable sets and move to limits and epsilon neighborhoods. Evaluating a derivative at a single point by definition tells us something about the neighborhood of that point. Mentally expanding the series terms to their limit forms helps see that it's not just a random polynomial. I guess the real "miracle" is that for a large class of useful functions those ugly limits evaluate to simple forms.


1) We have one point of the function.

2) We know one point of its derivative. This means we know 2 more points of the function (one step in either direction).

3) We know one point of its second derivative. This means we know 2 more points of the derivative, which in turn gives us 2 more points of the function.

etc.

Or at least, that's the impression I get from this thread, since I wasn't familiar with the series before that.


It's more like at each step from n -> n+1, our understanding grows by some amount. The way you write it seems to imply that no information is gained until some arbitrary point when n is extremely large.


But the problem is that the definitions of those derivatives are for infonitesimal change. What about far away points?


Far away points have properties resembling infinitesimally close points, as long as the function is continuous.

Now when there's a lack of continuity, then we run into a problem.


I'm not a mathematician but I assume there are functions with an infinite number of derivatives. For such functions you cannot know all the values of the function by knowing the values of all derivatives, because you would have to know infinitely many numbers.

Whereas if there are a fixed number of derivatives then at some level the derivative is constant and thus you can intuitively think that based on those derivatives you can then "draw" the function.


The definition works for any point. However often for higher functions you need to break the function into ranges to find closed form derivative series and get a useful result.


In-phony-tesimal, I like that word.


I love math.

Thank you for the beautiful explanation.


That's not actually true. There exist functions of a single real variable that have derivatives at the origin that are all zero, yet are nonzero. For an example, see the following:

https://en.wikipedia.org/wiki/Taylor_series#Analytic_functio...

Nevertheless, it is actually true that all complex differentiable functions satisfy this property, which is miraculous.


> There exist functions of a single real variable that have derivatives at the origin that are all zero, yet are nonzero

The function that is equal to 0 for x<1 and equal to 1 otherwise also satisfies this.


The more interesting statement is:

> There exist smooth functions of a single real variable that have derivatives at the origin that are all zero, yet are nonzero

https://en.wikipedia.org/wiki/Non-analytic_smooth_function


fyi, that's the Heaviside function


Well, sure. Not all functions are described by their Taylor expansions. But this very large class is. I agree, if the word “miraculous” applies at all in mathematics, then this surely is one.


This seems pretty tautological to me.

  this is not true of all functions... But it is true for very large classes of functions
Which classes of functions? It sounds like you're saying that a function is determined by its derivatives if it's determined by its derivatives.


Not really. Sure, the technical term is an “analytic function”, the definition of which is, essentially, “any function that is described by its Taylor series”. The surprising thing is how many functions are analytic. All polynomials. The exponential, and sums of exponentials. The trigonometric functions. Bessel functions. All complex differentiable functions. Combinations of the above.

Nearly any function you are likely to encounter in a physics class, to the best of my knowledge, is analytic (IANAP).

Another way to look at this is: “almost all” smooth functions are “nearly” polynomials. Why should trigonometric functions be “nearly” polynomials? Why should exponentials? Why should there be any connection at all between the trig functions and the polynomials? I still say this is a surprising result.


Let's take "analytic" to mean "determined by its maclaurin series". It's not surprising that polynomials are analytic. Look at the definition of a polynomial. It's already in "power series" form, in fact there are only finitely many terms!

As for the other examples, here is one explanation for why they are ubiquitous in physics. Essentially, the class of analytic functions is closed under "solving differential equations".

So that's why sin/cos/exp/bessel are analytic - they are solutions to differential equations with constant/polynomial coefficients (we already know that constants and polynomials are analytic). That's why many functions found in physics are analytic - they are created from other analytic functions via a differential equation.

https://math.stackexchange.com/a/190167

P.S: Regarding your last statement "almost all smooth functions are almost polynomial", something much stronger is true: Every continuous function is almost a polynomial.

https://en.wikipedia.org/wiki/Stone–Weierstrass_theorem


Polynomials and complex numbers are connected via Fundamental Theorem of Algebra. Complex numbers can be represented in polar form - trigonometry/geometry, which can be connected via calculus to exponential form - Euler's formula.

http://tutorial.math.lamar.edu/Extras/ComplexPrimer/Forms.as...

https://en.m.wikipedia.org/wiki/Euler's_formula

https://en.m.wikipedia.org/wiki/Fundamental_theorem_of_algeb...


I find integrals even more magical. Just by evaluating the antiderivative of a function at a single point we get a summary of the behavior of the function everywhere between that point and minus infinity!


And like proper miracles, it often takes a ton of work and some divine insight to find an integral analytically.


Which is funny, in the analytical world differentiation is easy and integration is hard, but in the numerical world it's the other way around.


> it is true for very large classes of functions.

While it is, yes, from the point of view of a human counting them, very large, measure-theoretically it is a meager set. If you were picking functions at random from the space of all possible functions, you would never (i.e. with probability 0) pick one with a useful Maclaurin series.


It doesn't appear miraculous because knowing all derivatives for a function is quite a lot of information about a function. A polynomial of degree six can be described with six derivatives but it can also be described with six factors. There is no information advantage.


If you define a polynomial by its coefficients, and then take its derivatives at a point and forget the coefficients, it would be reasonable to wonder whether you had lost some information. The miracle is that you haven't (if you also remember that the original function is a polynomial).


It's true for analytic functions, I think?


Analytic functions are usually defined to be those functions which can be represented by their Taylor series. This is what I meant by `tautological'.


It's one of the definitions, and perhaps not the most fundamental.

In the complex plane (and all real analytic functions can be analytically extended into the complex plane), you can define the equivalent property as saying a (first!) derivative has the same value no matter how you choose to take the limit.

So just by virtue of having a well defined first derivative in a particular region you get all the Maclaurin series magic.


This is behavior that could be expected when you evaluate _any_ infinite Taylor series far enough away from where the series is centered. Denominators of a typical Taylor series are fixed and the numerators depend polynomially on x.

The post is definitely an interesting observation. I would chalk the miracle up to Taylor series and analytic functions in general, however.


I guess the miracle is that the series converges everywhere. For example, the Taylor series for log(1 + x) only converges for |x| < 1.


Exactly. Imagine how their mind would be blown when evaluating sin(1000pi), which is exactly 0 with the taylor series of sin(x) centered around x=0.


The exponential captures the idea of slowly perturbing an object by a small amount. Suppose you start with some object, perform some action on it, then add the result back to your original object. On a manifold you can't add objects together, but you can push your object a little bit along a tangent vector (think of the tangent vector just as the slope of a function at the point). If you just recursively do this operation, as long as the amount you push by grows smaller you end up at a well defined point. Effectively you have pushed a point 1 unit along a geodesic (locally shortest path). So you can define the exponential map in much more generality as a Taylor series for more interesting things than just functions on R. For example given a (matrix) Lie group G and its Lie algebra (actually the tangent space at the identity), the exponential map

  exp : g -> G
is defined by its Taylor series

  exp(X) = 1 + X + (1/2) X^2 + ...

(X is some n x n matrix).
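
For instance, summing that series by hand for the quarter-turn generator gives back a rotation matrix (a numpy sketch; the 30-term truncation is arbitrary):

    import numpy as np

    def expm(X, terms=30):
        # exp(X) = 1 + X + X^2/2! + ... summed directly
        result, term = np.eye(len(X)), np.eye(len(X))
        for n in range(1, terms):
            term = term @ X / n
            result = result + term
        return result

    theta = np.pi / 2
    X = theta * np.array([[0.0, -1.0],  # element of the Lie algebra so(2)
                          [1.0,  0.0]])
    print(expm(X).round(6))             # [[0, -1], [1, 0]]: rotation by pi/2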


Everything you said seems to be valid if you replace 'exponential' by 'f' where 'f' is any real analytic function.


First of all, I was capturing the definition

exp(f) = lim_{n \rightarrow \infty} (id + f/n)^n,

but yes there are lots of other functions you can define more generally. The exponential just comes up particularly naturally.


Ah, I didn't realize that's what you were trying to say with the explanation at the beginning. Now your post makes more sense.


I'm late to the party. But let me offer first the finite-differences linear equation

    A[n]=A[n-1]*k
For A[0] = 1, the solution is the sequence

    1, k, k^2, k^3, k^4, k^5, k^6....
If we let k be the complex number i, this becomes

    1, i, -1, -i, 1, i, -1, -i, 1...
As you can see, the introduction of imaginary parts extends the original idea of powers-of-k (which if real go exponentially to infinity if k>1 and to 0 if k<1) to an alternating oscillatory pattern both in the real parts

    1, 0, -1, 0, 1, 0, -1, 0, 1
and the imaginary parts. What you're seeing there is almost the celebrated relationship between imaginary exponentials and sine functions.

Now: that was the finite-differences linear equation. If instead we take the differential linear equation

    f'(x) = kf(x)
we get the exponential function. Again, if k=i, you get oscillatory behavior.
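
Both behaviors are visible in a few lines of Python (a sketch):

    def powers(k, n=8):
        # iterate A[n] = A[n-1] * k with A[0] = 1
        seq = [1]
        for _ in range(n):
            seq.append(seq[-1] * k)
        return seq

    print(powers(2))                     # [1, 2, 4, 8, ...]: real k > 1 explodes
    print(powers(1j))                    # [1, 1j, -1, -1j, 1, ...]: k = i cycles
    print([z.real for z in powers(1j)])  # [1, 0, -1, 0, ...]: the cosine-like part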


This is a special case of a more general phenomenon: quite a lot of probability density functions (of which the exponential function is an example) can be expressed as alternating sums containing huge combinatorial terms. As if by magic, the result is always between zero and one. It’s actually a large headache for computations.


Huh? Probability density functions aren’t bounded between 0 and 1. Their integrals are exactly 1, but that’s considerably less interesting.


Sorry, you are right, I meant to say cumulative distribution function. (Or survival function, as in the case of exp(-x).)


True, but this has never felt like math magic to me. Usually you come up with a magical function for the PDF or similar, and then normalize it so that the CDF has the right integral. That last bit is just drudgery, not magic.

If you’re lucky, you can work with unnormalized distributions.


For example, the Dirac delta has an unbounded height and integrates to 1.


To be pedantic, the Dirac delta is not a probability density function.


This isn't pedantic, it's irrelevant. It is a generalized density function. Whether or not it represents probability is up to the way you're using the function.


Yea, it's one thing to cancel out perfectly in theory but sums of numbers with big differences in magnitude aren't great for numerical stability.


But this is the case for the polynomial approximation to just about anything. The individual terms yield wildly different numbers that cancel each other out, and individually they approximate the desired function poorly.

As a joke, a few years ago I created a polynomial approximation for Fibonacci.

fib(9) suddenly goes berserk and calculates the answer to life, the universe and everything!

https://news.ycombinator.com/item?id=14331627


You should approximate it with an exponential function. Actually, the nth Fibonacci can be obtained exactly by rounding an exponential function:

https://en.wikipedia.org/wiki/Fibonacci_number#Computation_b...


I'm well aware of the closed form for Fibonacci since my undergrad days. This was produced as a kind of joke. I simply took the first several, plus 42 thrown in, and fit them to an N-th degree polynomial.
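
Presumably something along these lines (a sketch, not the linked code: an 8th-degree polynomial interpolating eight Fibonacci numbers plus a smuggled-in 42):

    import numpy as np

    xs = np.arange(1, 10)
    ys = [1, 1, 2, 3, 5, 8, 13, 21, 42]  # fib(1)..fib(8), then 42 instead of 34
    coeffs = np.polyfit(xs, ys, 8)       # degree 8 through 9 points: exact interpolation
    print(round(np.polyval(coeffs, 9)))  # 42, the answer to everything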


This is a very poor description of this formula for e^x - it isn’t a miracle at all that it converges!

It’s pretty simple. The numerators are x^n. The denominators are n!. For any x, the denominators grow faster than the numerators, so it’s no surprise that it converges. The ratio only starts shrinking when n > x but you can see that with high school math.

Somehow all these largish random numbers manage to cancel out almost completely.

No, they aren’t “largish random numbers”, it’s just the ratio between two series, one of which grows asymptotically faster than the other.


Converging isn't a surprise. This giant mishmash somehow converging near zero is a surprise, and getting something with e is even more of a surprise.

Nowhere in the post does it express surprise about the fact that each series converges on something.

Separately, "largish random numbers" is a perfectly fine way to describe the start of the series, which is the most important part for influencing what it eventually converges on. Somehow despite each series getting increasingly enormous before dropping back toward zero, the convergence point ends up less and less perturbed by those enormous terms.


From my non-mathematician view, it's not one function per se, but how they all fit together in an ideal framework.


Meanwhile, I just thought it was cool that you could draw a line where its slope was equal to its value.


It's beautiful math, yes. But a miracle? I don't know.


This seems intuitively obvious, not a miracle.

Take every other element in a series that has any regularish omnidirectional trend, alternate addition and subtraction, and you are quite likely to end up with a zero convergence.


I'm not sure exactly what you mean here, but I can't think of an interpretation that makes your statement close to being true.

e.g. 1+4-9+16-25+36-49=-26


for ages we've been adding the miracle tag for things we don't fully understand.

just get used to it.


That doesn't really show that it's amazing. You could say the same about x-x=0, even for large x. Or about any converging series. Unless all of that is amazing too - I wouldn't know, not being a mathematician.


You gave two scenarios which are worth addressing:

"x-x=0". Here we have two terms (x and -x). Their magnitude grows at the same rate, so it is not surprising that their difference is fixed and the two-term "series" "converges" to zero.

"Any converging series." Consider the exponential growth function: e^x = 1 + x + x^2/2 + x^3/6 ... + x^n/n!. This grows fast, but we know that the factorial is super-exponential so the terms must approach zero for any fixed x. If the series converges (true though not obvious) then the value must increase very quickly.

Now consider the exponential decay function. Unlike x-x, the terms are raised to different powers, so they grow at different rates. And unlike the exponential growth function, the series converges to zero in the limit.

The "miracle" is not the convergence but how it happens: terms growing in opposite directions at wildly different rates. It's as if a lopsided spaceship chaotically fired rockets at full power in all directions, and happened to stay exactly still.


I see, it's more amazing than my simple examples. But why is it more amazing than this presumably uninteresting series I just made up:

sum(n=1...infinity, 1/2^n - 1/(2^n+1))

That also has alternating terms that grow differently in opposite directions and have huge numbers, yet they cancel out so that it converges to something. Is that uninteresting because each term in a pair has the same exponent, or because it doesn't converge to something that's closer to zero as something in the terms increases?

Maybe you can't easily just make up a converging series that has all the features they listed and that uniqueness makes it interesting?


I would say it's not as interesting because a high-schooler (or Pythagoras) could easily verify the fact AND they would easily be able to explain how you came up with that sum.

One of my metrics for determining if a fact F is interesting is if P(F) is much larger than V(F).

Here P(F) is, vaguely speaking, the amount of effort required to "explain" or "generalize" the fact, and V(F) is the amount of effort required to verify the fact. In the case of e^{-x}, each individual example can be verified by an elementary schooler so V(F) is very small. On the other hand, I don't see how you could explain the phenomena of convergence to 0 without teaching the elementary schooler a large part of calculus, so P(F)/V(F) is large.


> and happened to stay exactly still

Or, perhaps, happened to make a soft landing on a specific planet!


Write down the first 50 terms of the series for e^{-x}. Go back in time and tell Pythagoras to plug in x=2,3,4,5,6,7,8,10,...

Each of the terms is a rational number, and each of the sums will be an extremely small rational number (numerator will be tiny compared to denominator).

Don't you think Pythagoras would be impressed and mystified? Would he be able to come up with an explanation for this `coincidence'?

Now try the same with x-x. That's not going to impress Pythagoras.


On the face of it, if someone told me a series converged on a small value like -0.4, I wouldn't expect to encounter 43 million along the way.

It's easy to dismiss things as uninteresting but HN is a forum for nerds and I'd conjecture that things like this are interesting for that group.


Honestly, that was either too trivial or underexplained to be nerd content. Maybe I’m watching too much PBS and Numberphile, but nothing here seemed amazing.


You're obviously just jaded. I agree with the post, I'm surprised when a large list of seemingly random large numbers cancels out like that. The secret of course is that they aren't random, even though they appear so to the naked eye.



