Ask HN: What is your favorite equation? - pcarolan
======
pacaro
When I first started chatting online with the woman to whom I'm married, she
told me that she had a single tattoo, and that it was of an equation.

I immediately guessed Euler's identity and wrote "$e^{i\pi}+1=0$"

That was when we knew it was serious.

I got lucky (in more ways than one) because she'd nearly gone with her other
choice, the continuum hypothesis

~~~
davidme2
Was this a random stranger you started chatting with online or did you know
her in real life before?

~~~
pacaro
We had exchanged messages on okcupid and had progressed to gchat. At the time
she was in New York and I was in Seattle

~~~
curiousgal
A success story, congratulations!

------
fusiongyro
It looks like an equation, but it's really almost a Y combinator, almost in
Haskell:

    
    
         y f = f (y f)
    

Though this won't exactly type check, it's a pithy reminder of what I found so
powerful about functional programming, which is that _any_ repetition in the
code ought to be something you could factor out. The Y combinator shows you
that _even the idea of a function calling itself_ can be factored out. It's
hard to imagine ever truly being forced to repeat yourself when even that can
be factored out.

Of course that isn't the real point of the Y combinator, but this thought,
this "equation" completely changed the way I think about programming.
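
The same idea can be sketched in Python (my rendering, not the parent's): the
fixed-point helper hands a function "itself" as an argument, so no definition
ever mentions its own name.

```python
# A fixed-point combinator sketched in Python: `f` receives "itself"
# as an argument, so the definition of factorial below never recurses by name.
def y(f):
    # eta-expansion (lambda x: ...) delays evaluation so this terminates
    return lambda x: f(y(f))(x)

# factorial written without self-reference
factorial = y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(factorial(5))  # 120
```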

~~~
harpocrates
This works in Haskell just fine, thank you very much!

    
    
        ghci> y f = f (y f)
        ghci> factorial = y (\f x -> if x == 0 then 1 else x * f (x-1))
        ghci> factorial 5
        120
    

However, the Y combinator as defined in a non-recursive form is tougher to
implement in Haskell [1][2]. It looks a lot better in Racket (and this is
coming from someone who doesn't particularly like Racket).

    
    
        ((λ (r) ((λ (h) (h h)) (λ (g) (r (λ (x) ((g g) x)))))))
    

[1]
[http://stackoverflow.com/a/5885270/3072788](http://stackoverflow.com/a/5885270/3072788)
[2]
[http://stackoverflow.com/a/13119751/3072788](http://stackoverflow.com/a/13119751/3072788)

------
latenightcoding
The normal equation:
[https://essentialproject.files.wordpress.com/2013/12/normal-...](https://essentialproject.files.wordpress.com/2013/12/normal-
equation.jpg)

used to find the line of best fit.

The first time I saw it I was in high school and I was watching Andrew Ng's
machine learning class for the first time. My Math background was not the best
and I didn't fully understand it until a couple of years later when I took
MIT's free online course on linear algebra (OCW 18.06)

~~~
harpocrates
For the interested: take `B` to be the coefficient vector minimizing `(Y - XB)'(Y - XB)`
(least squares, aka the "best fit"); then `B = (X'X)^-1 X'Y`.

Proof: Since `B` is a minimum, the derivative of the expression minimized wrt
`B` must be zero (the "derivative" is taken in the matrix sense here).

    
    
        0 = d/dB [ (Y - XB)'(Y - XB) ]
          = d/dB [ Y′Y − Y′XB − B′X′Y + B′X′XB ]
          = d/dB [ Y′Y − 2Y′XB + B′X′XB ]            Y′XB and B′X′Y are just scalars :)
          = − 2X′Y + 2X′XB
    

Solving for `B` you get the result.
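
A quick numerical sanity check (a NumPy sketch, with made-up data) that the
closed form matches a brute least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                  # design matrix
B_true = np.array([1.0, -2.0, 0.5])
Y = X @ B_true + 0.01 * rng.normal(size=50)   # noisy observations

# the closed form: B = (X'X)^-1 X'Y
B = np.linalg.inv(X.T @ X) @ X.T @ Y

# agrees with NumPy's own least-squares solver
B_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(B, B_lstsq))  # True
```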

[1]
[https://isites.harvard.edu/fs/docs/icb.topic515975.files/OLS...](https://isites.harvard.edu/fs/docs/icb.topic515975.files/OLSDerivation.pdf)

~~~
latenightcoding
Or watch this video:
[https://www.youtube.com/watch?v=Y_Ac6KiQ1t0](https://www.youtube.com/watch?v=Y_Ac6KiQ1t0)

One of my favourite math lectures.

------
Tempest1981
The Mandelbrot Set equation, just for the infinite cool visuals, given its
simplicity.

[https://en.wikipedia.org/wiki/Mandelbrot_set](https://en.wikipedia.org/wiki/Mandelbrot_set)
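
The whole construction really does fit in a few lines; a minimal escape-time
sketch (function name mine):

```python
def mandelbrot_iterations(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0; return how many steps until
    |z| exceeds 2, or max_iter if it never escapes (c is in the set)."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

print(mandelbrot_iterations(0))   # 100: 0 never escapes, it's in the set
print(mandelbrot_iterations(1))   # 2: 0 -> 1 -> 2 -> 5, escapes fast
```

Coloring each pixel of the complex plane by that iteration count gives the
familiar pictures.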

------
deepnotderp
Surprised no one's mentioned Bayes's Theorem yet. It's so incredibly simple
yet underpins so much of statistics.

------
gajomi
At some point in my past I became biased towards inequalities over equalities;
they somehow seemed more fundamental (I have no intention to defend this
sentiment... it's probably insubstantial). So there are lots of fun
inequalities, but probably my favorite is the bound on the support of
eigenvalues of a matrix given by the Gershgorin circle theorem
([https://en.wikipedia.org/wiki/Gershgorin_circle_theorem](https://en.wikipedia.org/wiki/Gershgorin_circle_theorem)).

------
emjoes1
I like the wavelength formula because of its simplicity and practicality.

    
    
      λ = c / f
    

It is a cool way to demonstrate the importance of the speed of light to figure
out the optimal length of an antenna for a given radio frequency.

My son, when asking why knowing the speed of light matters, was impressed when
I showed him we could make our own wifi antenna by using this to figure out
the length it needed to be. Of course antenna design can be complex but this
was still cool to show.
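
For the record, here's the wifi arithmetic (a sketch; real antenna design adds
velocity factors and impedance matching on top of this):

```python
c = 299_792_458        # speed of light, m/s
f = 2.4e9              # 2.4 GHz wifi band, Hz

wavelength = c / f               # λ = c / f
quarter_wave = wavelength / 4    # a common monopole element length

print(f"λ   ≈ {wavelength * 100:.1f} cm")    # ≈ 12.5 cm
print(f"λ/4 ≈ {quarter_wave * 1000:.1f} mm") # ≈ 31.2 mm
```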

~~~
pitaj
To be more general:

    
    
        λ = u / f
    

Where `u` is the speed of the wave in the medium.

------
n00b101
Black-Scholes Equation [1]

Oh so many reasons to love it and hate it

[1]
[http://www.espenhaug.com/black_scholes.html](http://www.espenhaug.com/black_scholes.html)

~~~
harpocrates
Your link is slightly misleading - it shows ways of approximating solutions to
the equation. The equation itself is a partial differential equation [1].

    
    
        ∂V/∂t + (1/2)σ²S²·∂²V/∂S² + rS·∂V/∂S − rV = 0
    

[1]
[https://en.wikipedia.org/wiki/Black–Scholes_equation](https://en.wikipedia.org/wiki/Black–Scholes_equation)
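
For a European call, the PDE has a closed-form solution under the model's
assumptions; a self-contained sketch using only the standard library:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price (one solution of the PDE)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# at-the-money, 1 year, 5% rate, 20% vol
print(round(bs_call(S=100, K=100, T=1, r=0.05, sigma=0.2), 2))  # 10.45
```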

------
ammmir
euler's identity e^ipi+1=0 of course. it connects all the dots (e, pi,
sqrt(-1), 1, 0), is fun to prove, and just plain cool. i have no idea of its
practical applications though.

~~~
harpocrates
A math professor of mine once said of this (paraphrased): yes it is cool, but
really it is just a trivial consequence of how we defined our extension of the
real exponential map to complex numbers. The really cool part is that this
extension is the _only_ possible holomorphic (differentiable for complex
numbers) extension. All that to say that the really nifty part of this
identity is the uniqueness of analytic continuations (on the whole complex
plane).

------
countzeroasl
Navier-Stokes. Always had a thing for fluid mechanics.

~~~
jfmiller28
Same here. I'm with Heisenberg, "When I meet God, I am going to ask him two
questions: Why relativity? And why turbulence? I really believe he will have
an answer for the first."

------
softc

        d^2/dx^2 sin(x) = -sin(x)
    

This means that sin(x) is an eigenfunction of the second derivative operator
in the infinite dimensional vector space of functions.

This little fact makes numerically solving second-order differential equations
using a sin(x) basis really cheap, just need to pay the upfront FFT cost...
(O(n log n))
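
A NumPy sketch of that spectral trick: in Fourier space, d²/dx² is just
multiplication by −k², so applying the operator costs one FFT round-trip.

```python
import numpy as np

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.sin(x)

# spectral second derivative: FFT, multiply by (ik)^2 = -k^2, inverse FFT
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # angular wavenumbers
d2f = np.fft.ifft(-(k**2) * np.fft.fft(f)).real

# sin is an eigenfunction of d^2/dx^2 with eigenvalue -1
print(np.allclose(d2f, -np.sin(x)))  # True
```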

------
hooloovoo_zoo
Euler's Identity. There is so much mathematical content in such a small
equation.

------
pmiller2
V - E + F = 2 - 2g, the defining equation for the genus/Euler characteristic
of a surface. It has applications all throughout topology and graph theory.
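
A quick check on familiar shapes: closed convex polyhedra are sphere-like
(genus 0), while a square torus mesh gives genus 1.

```python
def genus(V, E, F):
    # V - E + F = 2 - 2g  =>  g = (2 - (V - E + F)) // 2
    return (2 - (V - E + F)) // 2

print(genus(8, 12, 6))   # cube: 0 (sphere-like)
print(genus(4, 6, 4))    # tetrahedron: 0

# an n-by-n square grid with wraparound (a torus): V = n^2, E = 2n^2, F = n^2
n = 4
print(genus(n * n, 2 * n * n, n * n))  # 1
```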

------
Mr_P
The rendering equation. Because the fact that it's hard to evaluate means I
have a job.

------
noobermin
If you asked for Theorems, god the conversation we'd be having.

For equation, probably, Maxwell's equations, because I use them in my
research.

------
japhyr
I don't do graffiti often, but when I do this is what I write:

∇ ‧B = 0

I love the simplicity of the equation, and the implications that come from it.

(I only write it in places where graffiti has become acceptable, like some
hike-in cabins. I also occasionally leave it in small writing in the corner of
random whiteboards.)

------
ronilan
_" all animals are equal, but some animals are more equal than others"_

------
monochromatic
[https://en.wikipedia.org/wiki/Residue_theorem](https://en.wikipedia.org/wiki/Residue_theorem)

It just really took me by surprise when I learned it, and it's useful too.

------
jayajay
Euler's formula. This one formula is the most practical formula I've ever
encountered. Once you learn it, you will never stop using it. The Schrodinger
equation is a lot of fun, but you will _likely_ stop using it if you stop
doing quantum mechanics. Maxwell's equations are _beautifully simple_ and tend
to appear in many other contexts other than classical electrodynamics. For
example, in the absence of charge, Maxwell's equations describe free waves --
which can appear in many contexts other than electromagnetic fields.

------
RaitoBezarius
Riemann Zeta function, so mysterious. How could complex numbers and complex
analysis provide so many insights on number theory? That's so intriguing to
me.

------
rimunroe
I can't really pick between the equation giving the Hamiltonian of a system,
or the Euler-Lagrange equation. I'll go with the latter, because I learned it
first. Learning about these in my classical mechanics course suddenly made an
entire class of problems which was previously incredibly tedious to solve fall
away into trivially easy forms. I got a similar feeling to when I learned
about using Laplace transforms in my ordinary differential equations class.

------
micaksica
While not really a mathematical "equation", the Drake equation, simply for the
debate it causes and the ramifications of parameter changes on what we think
of as a race.

~~~
_coldfire
It is an equation, albeit probably flawed somewhere down the line.

N = R* • fp • ne • fl • fi • fc • L

[https://en.wikipedia.org/wiki/Drake_equation](https://en.wikipedia.org/wiki/Drake_equation)

An excellent use of rough statistics to shatter long held beliefs. Even if
life is incredibly rare, it most certainly exists somewhere else.

------
mmmBacon
Del dot B = 0

Gauss's law for magnetism is simultaneously simple and profound. It says that
there is no such thing as magnetic charge (a magnetic monopole).

------
kurthr
The Riemann Curvature Tensor (of General Relativity) written in Christoffel
symbols is quite beautiful.
[https://en.wikipedia.org/wiki/Ricci_curvature](https://en.wikipedia.org/wiki/Ricci_curvature)

Using Einstein summation notation and covariant (raising and lowering
operator) derivatives the curvature of spacetime can be compactly described.

------
jbpetersen

      // since typing it out like this is preferable to reteaching myself LaTeX right now
      
      let favoriteEquation = arr => {
        // L1-normalize: each entry's magnitude as a fraction of the total
        const total = arr.reduce((sum, x) => sum + Math.abs(x), 0);
        return arr.map(x => Math.abs(x / total));
      };

------
gaze
The stochastic master equation. It pulls together Born's rule, the
Schrödinger equation, dissipation, and measurement.

------
dorianm
e=mc^2

Just the fact that energy can be converted to mass and mass can be converted
into energy is very interesting

~~~
softc
Just to clarify, mass-energy equivalence implies nothing about mass-energy
_conversion_. We've never witnessed such a conversion nor do we know if it's
possible. The implication is that mass difference of an atom after energy
absorption is directly proportional to the energy absorbed. Energy is still
energy in that case, no conversion has taken place.

~~~
Klockan
> We've never witnessed such a conversion

We sure have, LHC creates matter/anti-matter particle pairs using enormous
amounts of concentrated energy.

~~~
softc
[https://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalenc...](https://en.wikipedia.org/wiki/Mass%E2%80%93energy_equivalence#Binding_energy_and_the_.22mass_defect.22)

------
bssrdf
Kajiya's rendering equation

------
gozur88
The _vis-viva_ equation (AKA the law of orbital energy invariance)

v^2 = GM(2/r - 1/a)

It's so simple, and at the same time so useful for orbital mechanics.
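
Plugging in a circular low Earth orbit as a sanity check (r = a, so the
equation collapses to v² = GM/r; altitude roughly that of the ISS):

```python
from math import sqrt

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
r = 6_771_000         # ~400 km altitude orbit radius, m
a = r                 # circular orbit: semi-major axis equals radius

# vis-viva: v^2 = GM(2/r - 1/a); here 2/r - 1/r = 1/r
v = sqrt(GM * (2 / r - 1 / a))
print(f"{v:.0f} m/s")  # ≈ 7672 m/s, about right for the ISS
```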

------
bsvalley
(Bad idea + Warm referral) = New Startup

------
maj0rhn
de Broglie: E = h * nu

It comes up so often! Whenever someone asks what's new, the answer, of course,
is "E over h." :-)

~~~
jayajay
This is Planck's Equation. De Broglie's equation is p = h/λ

~~~
maj0rhn
You're right -- thank you.

[https://en.wikipedia.org/wiki/Planck%E2%80%93Einstein_relati...](https://en.wikipedia.org/wiki/Planck%E2%80%93Einstein_relation)

------
jwally
d = v_i*t + 0.5*a*t^2. Ballistics was the first practical application of
non-arithmetic math I encountered in school.

------
higgsfield
P(A|B) = P(B|A) * P(A) / P(B)
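
The classic diagnostic-test example (numbers assumed for illustration) shows
why the prior P(A) matters so much:

```python
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01            # prior: 1% of the population has the disease
p_pos_given_disease = 0.99  # test sensitivity
p_pos_given_healthy = 0.05  # false positive rate

# total probability of a positive test
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive

print(f"{p_disease_given_pos:.1%}")  # ≈ 16.7%: mostly false alarms
```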

------
veddox
The Verhulst equation: dN/dt = rN * (1 - N/K)

Probably the most important equation in my field (ecology).
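
A minimal forward-Euler integration (step size and parameters assumed) shows
the characteristic behavior: N levels off at the carrying capacity K.

```python
r, K = 0.5, 1000.0   # growth rate and carrying capacity (illustrative)
N, dt = 10.0, 0.01   # initial population and time step

# integrate dN/dt = rN(1 - N/K) for 100 time units
for _ in range(10_000):
    N += r * N * (1 - N / K) * dt

print(round(N))  # 1000: the population saturates at K
```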

------
longsigh
n+1 is the number of socks I need to locate to find a matching pair where n is
the number of unique styles

------
Nevermark
This equation, and an unanswered question about it, has fascinated me most of
my life:

    
    
      a * b = exp(log a + log b)
    
      Equivalently: a + b = log(exp a * exp b)
    

The reason I like this is that it is the numerical equivalent to De Morgan's
Law of Boolean logic:

    
    
      a and b = not(not a or not b)
    
      Equivalently: a or b = not(not a and not b)
    

They are both dualities, where AND and PRODUCT can be thought of as
intersections, with OR and PLUS being unions.

However, there is one crucial difference: in the Boolean case the duality is
based on a symmetric operation; in the numerical case it is not.

    
    
      a = not b, b = not a
    
      a = log b, b = exp a
    

It is very interesting that if you:

1) take a set of symbols, then define various adds and subtractions of those
symbols you will get some result, which will be a sum of positive/negative
versions of those symbols,

2) then replace all your additions and subtractions with multiplications and
divisions of the same symbols,

...you will get an answer with exactly the same form, and with the same
symbols, except with products/divisions instead of additions/subtractions. In
isolation, addition and product behave exactly the same way!

We could call the addition/multiplier operator the combiner operator and the
subtraction/divisor operator the remover operator, and forget about whether we
were doing addition or multiplication and would still get the right answer
either way.

It is only when you combine BOTH additions and multiplications in equations
that they operate differently RELATIVE to each other, with the LOG/EXP relation
"adjusting" values between their additive and multiplicative forms and back.

INSIGHT

So add and multiply are the same operations, but on values that are either the
log or exp version of themselves respectively.

INSIGHT TAKES US TO AN ODD PLACE

Looked at that way, numbers live on an "Addition Ladder" where numbers start
at the bottom rung where addition happens and to move up to multiplication we
apply EXP before continuing to combine them with addition and then LOG them to
get back to our home rung again.

Now that we have two rungs, why not operate at more levels up (EXP EXP) or
below (LOG). The math is easy to do this, but for some reason nature doesn't
seem to give us much use for "super-multiplication" or "sub-addition".
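
The ladder is easy to play with numerically. Define each level's combine as:
drop down a rung with LOG, add, climb back with EXP (a sketch; the function
name is mine):

```python
from math import log, exp, isclose

def combine(a, b, level):
    """level 0: addition; level 1: multiplication;
    level 2: 'super-multiplication', i.e. exp(log a * log b)."""
    if level == 0:
        return a + b
    return exp(combine(log(a), log(b), level - 1))

print(combine(3, 4, 0))   # 7: a + b
print(combine(3, 4, 1))   # 12 (up to float rounding): a * b = exp(log a + log b)
print(combine(3, 4, 2))   # exp(log 3 * log 4): the next rung up
```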

MY UNANSWERED QUESTION

Why doesn't nature use super-multiplication or sub-addition? Or does it
somewhere and I am just not aware of it?

* If anyone has any insight on this, please pass it on! nevermark.mail at marks.house

~~~
softc
I think the similarity to de Morgan's law falls out naturally as a consequence
of Boolean algebra having many similar properties to real number algebra, both
for the multiplication operation and addition operation. Have you studied
abstract algebra? Category theory?

------
pizza
Fluctuation theorem

Shannon entropy

------
rebootthesystem
In thinking about this I wanted to reach for something that fundamentally
affected humanity. And then I came up with nothing. Literally. 0.

Quoting from Amazon:

"The Babylonians invented it, the Greeks banned it, the Hindus worshiped it,
and the Church used it to fend off heretics. Now it threatens the foundations
of modern physics. For centuries the power of zero savored of the demonic;
once harnessed, it became the most important tool in mathematics. For zero,
infinity's twin, is not like other numbers. It is both nothing and everything.

In Zero, Science Journalist Charles Seife follows this innocent-looking number
from its birth as an Eastern philosophical concept to its struggle for
acceptance in Europe, its rise and transcendence in the West, and its ever-
present threat to modern physics. Here are the legendary thinkers—from
Pythagoras to Newton to Heisenberg, from the Kabalists to today's
astrophysicists—who have tried to understand it and whose clashes shook the
foundations of philosophy, science, mathematics, and religion. Zero has pitted
East against West and faith against reason, and its intransigence persists in
the dark core of a black hole and the brilliant flash of the Big Bang. Today,
zero lies at the heart of one of the biggest scientific controversies of all
time: the quest for a theory of everything."

Chapter 0 intro from the book:

"Zero hit the USS Yorktown like a torpedo.

On September 21, 1997, while cruising off the coast of Virginia, the billion-
dollar missile cruiser shuddered to a halt. Yorktown was dead in the water.

Warships are designed to withstand the strike of a torpedo or the blast of a
mine. Though it was armored against weapons, nobody had thought to defend the
Yorktown from zero. It was a grave mistake.

The Yorktown's computers had just received new software that was controlling
the engines. Unfortunately, nobody had spotted the time bomb lurking in the
code, a zero the engineers were supposed to remove while installing software.
But for one reason or another, the zero was overlooked, and it stayed hidden
in the code. Hidden, that is, until the software called it into memory --and
choked."

This is a book about the history and issues surrounding the concept of nothing
and its mathematical representation, the number zero. It isn't a book about
computers or programming. It's about zero from ancient times to physics.

Great book. Read it years ago. Going to read it again.

Here's the link:

[https://www.amazon.com/Zero-Biography-Dangerous-Charles-
Seif...](https://www.amazon.com/Zero-Biography-Dangerous-Charles-
Seife/dp/0140296476)

------
Animats
E = I×R

