
Does 1+2+3+... Really Equal -1/12? - CarolineW
http://blogs.scientificamerican.com/roots-of-unity/does-123-really-equal-112/
======
Ruud-v-A
Here’s what’s really going on:

You can define a function f from a subset of the complex numbers to the
complex numbers where f(z) = \sum_{n=1}^\infty 1/n^z. Be careful with the
domain of this function: the series does not converge for all z. You can plug
in -1 and see that symbolically, f(-1) = 1 + 2 + 3 + .... But the series does
not converge for z = -1, and it is simply not true that \sum_{n=1}^\infty n =
-1/12; the series does not converge; equating it to something is a nonsensical
thing to do.

What is going on, then? Even though f is not defined for all complex numbers,
there exist functions from the complex numbers to the complex numbers that --
restricted to the domain of f -- are equal to f. They "continue" f to all of
the complex numbers. And if one imposes a restriction on these continuations
(namely that they are analytic), then it turns out that there is a unique
analytic continuation of f: the Riemann zeta function. And zeta(-1) = -1/12.

Don’t confuse the definitions here. The Riemann zeta function can be defined
as the analytic continuation of the series, the series is not defined in terms
of the Riemann zeta function!
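
For anyone who wants to poke at this numerically: Hasse's globally convergent series computes the analytic continuation everywhere except z = 1, so you can watch it agree with the ordinary sum where that converges and produce -1/12 where it doesn't. (A sketch in Python; the choice of series is mine, nothing here is special to zeta(-1).)

```python
from math import comb

def zeta(s, terms=40):
    # Hasse's globally convergent series (valid for all s != 1):
    # zeta(s) = 1/(1 - 2^(1-s)) * sum_{n>=0} 2^-(n+1) * sum_{k=0}^{n} (-1)^k C(n,k) (k+1)^(-s)
    total = 0.0
    for n in range(terms):
        inner = sum((-1) ** k * comb(n, k) * (k + 1) ** (-s) for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta(2))    # matches the convergent series: pi^2/6 = 1.6449...
print(zeta(-1))   # the continuation's value: -1/12 = -0.0833...
```

The point of the sketch: the same formula is being evaluated in both calls, but only at z = 2 does it coincide with a convergent sum of 1/n^z.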

~~~
nwjtkjn
What bothers me is that it seems there could be many ways of writing 1 + 2 + 3
+ ... as the "specialization" of a formal series depending on a variable z,
and for which the formal series converges to an analytic function on some
domain away from the specialization. I can imagine there's a way of doing this
in such a way that the function's analytic continuation to the specialization
evaluates to any number you want, not just -1/12. However, I'm having trouble
cooking up such an example.

~~~
DavidSJ
A trivial and artificial example is g(z), defined as the sum of G(n, z) for n
from 1 to infinity, where G(n, z) = 0 except if z is -1, in which case it is
n.

Thus g(-1) is 1 + 2 + 3 + ... while g(z) is otherwise 0 + 0 + 0 + ... = 0. And
then the zero function is the unique analytic continuation of g.

The reason people care about the Riemann zeta function is because of its deep
connections to analysis, number theory, and physics.

~~~
nwjtkjn
Hm yeah, I was thinking of something more natural, like requiring that the
G(n,z)'s themselves be analytic on some domain. Anyway, sure, the Riemann zeta
function is important, but I'm not sure how it's canonical.

------
tinco
Tl;dr: No, it doesn't. Under an unintuitive definition of infinite summation
that is useful in some physical calculations it does, which is surprising.

Under the normal rules, which hold for direct use, the answer is obviously
positive infinity, just as you would expect; you're not stupid, and you could
be a mathematician if you wanted to.

edit: For fun, a short story:

Muhammed is on top of a strangely shaped mountain that gets one step wider
with every step down. The mountain is so high he can't see the bottom, yet
Muhammed wants to move this mountain. So Muhammed starts fetching horses and
tying them to the mountain with ropes to move it. That's a direct use of this
equality, and you can't stand afar, look at the scene, and say "my, I think
that's about -1/12 horses Muhammed is fetching". You'll see Muhammed taking an
infinite amount of time fetching an infinite number of horses, and you'll
definitely see him do it more than once.

~~~
dskloet
How does an infinite sum of integers appear in nature (physics)?

~~~
inlineint
It appears in some quantum field theory calculations, in particular for
Casimir Effect.

In simple words, the Casimir effect consists of a force that emerges between
two parallel conducting plates. The force is proportional to the sum of the
energies of all possible standing electromagnetic waves between the plates. In
the calculation of this force a divergent series, the sum of all natural
numbers (or their powers), appears, and physicists use 1 + 2 + ... = -1/12 to
evaluate it (or the continuation of the zeta function at other points, as
appropriate).

[https://en.wikipedia.org/wiki/Casimir_effect#Derivation_of_C...](https://en.wikipedia.org/wiki/Casimir_effect#Derivation_of_Casimir_effect_assuming_zeta-regularization)

and

[https://en.wikiversity.org/wiki/Quantum_mechanics/Casimir_ef...](https://en.wikiversity.org/wiki/Quantum_mechanics/Casimir_effect_in_one_dimension)

~~~
cohomologo
I've never seen a physics book that treats this using the zeta function,
except popular articles that try to present this calculation as mysterious. In
practice, in my QFT class we needed to compute the sum

lim_{\epsilon \to 0^+} ( \sum_{n=1}^\infty n e^{-\epsilon n} + ... ),

that is, the series was multiplied by a decaying exponential function with a
rate of decay that goes to zero. This sum can easily be evaluated, and for
small epsilon it takes the form

sum = 1/epsilon^2 - 1/12 + O(epsilon^2).

The 1/epsilon^2 term (which goes to infinity) drops out of the final physical
result when you do the calculation properly.
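
That computation is easy to reproduce. Here's a Python sketch of the regulated sum described above (the truncation point is my choice; anything with eps * n_max >> 1 works):

```python
from math import exp

def regulated(eps):
    # sum_{n>=1} n * exp(-eps * n), truncated once the terms are negligible
    n_max = int(60 / eps)  # exp(-60) is far below double precision
    return sum(n * exp(-eps * n) for n in range(1, n_max + 1))

for eps in (0.1, 0.05, 0.01):
    # subtracting the divergent piece leaves approximately -1/12 = -0.0833...
    print(eps, regulated(eps) - 1 / eps ** 2)
```

As eps shrinks, the leftover after removing the divergent piece settles onto -1/12, with no zeta function in sight.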

------
whack
Just watched the entire video. Using the same "proof" that they used, I can
also prove that 1+1+1+1.... = 0

Proof:

S1 = 1 + 2 + 3 + 4.....

S2 = 0 + 1 + 2 + 3 + 4...

S1 - S2 = (1 + 2 + 3 + 4 + ...) - (0 + 1 + 2 + 3 + ...) = 1 + 1 + 1 + 1....

S2 == S1 (by definition, since all you're doing is adding a 0) => S1 - S2 == 0

Therefore 0 = 1 + 1 + 1 + .....

Obviously this is pure nonsense. You can't just "shift things around" and use
elementary mathematics when dealing with infinite series that don't converge.
Maybe there's a more convincing proof out there, but the one they presented in
the video is bogus.

~~~
zeroer
> Obviously this is pure nonsense.

This is begging the question. Why can't 1 + 1 + ... = 0?

Also, I wouldn't be so sure that S2 == S1. You can't re-arrange infinitely
many terms in an infinite series and still be guaranteed the sum is the same.

~~~
whack
S2 is identical to (0 + S1). Are you suggesting that (0 + S1) != S1 ?

~~~
zeroer
Precisely. Adding 0 to the front of an infinite series is shifting every term
by one to the right. It's not clear that shifting terms in series keeps the
sum the same. For instance, re-arranging infinitely many terms in a
conditionally convergent infinite series can change the sum.
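
A concrete instance of that last point (the standard textbook example, not something from the video): the alternating harmonic series sums to ln 2, but taking its terms in the order two-positives-then-one-negative converges to (3/2)·ln 2 instead.

```python
from math import log

def rearranged(blocks):
    # 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...: same terms as the alternating
    # harmonic series, taken two positive terms then one negative term.
    total = 0.0
    odd, even = 1, 2
    for _ in range(blocks):
        total += 1.0 / odd
        odd += 2
        total += 1.0 / odd
        odd += 2
        total -= 1.0 / even
        even += 2
    return total

print(rearranged(200_000))  # close to 1.5 * log(2) = 1.0397..., not log(2) = 0.6931...
```

Same infinitely many terms, different order, different sum: exactly why "all you're doing is adding a 0" needs an argument.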

~~~
whack
> Adding 0 to the front of an infinite series ... not clear that ... keeps the
> sum the same

I don't know if I would go that far... but I agree with the general spirit of
your comment. Which is also the point of my original post. If you think that
my appending a zero calls my proof into question, the proof presented in the
video takes far more dubious and horrific liberties.

------
CorvusCrypto
From my point of view, what Numberphile did was wildly successful. I think the
video's "wow" factor, as the author put it, is doing its job of getting people
interested in maths. Yes they could have explained some things a little
better. However as noted by the author and some of her other field-mates she
referred to at the bottom of the article, I think really the only sore people
are the mathematicians. I also think that this is because they already know
the rest of the story behind this interesting result. Once we know something
in detail we as humans tend to scoff at incomplete explanations because in our
eyes it does injustice to the topic.

However, to the normal viewer this video probably made maths look incredibly
interesting and more than likely even caused them to research it a bit more. I
would hazard to say an article like the one Dr. Lamb wrote would not have that
same effect, though it is technically more correct. Numberphile to me is more
about reigniting interest in maths in a society where you are usually
introduced to the topic through repetitive, seemingly impractical
calculations, and this video of theirs, as referenced in the article, has
definitely done that.

~~~
foofoo55
The 6 or so people I know who have seen the original Numberphile video
responded with statements like "I'll never understand math", "that's just
stupid", and "I don't get it". None expressed a greater interest in maths as a
result; quite the opposite.

Just like a good host at a party, a good educational video should leave the
viewer feeling positive and better about themselves. So clearly, in this case,
it depends on the audience.

~~~
knownothing
> Just like a good host at a party, a good educational video should leave the
> viewer feeling positive and better about themselves.

Is the purpose of education to make the other person feel positive and good
about themselves?

~~~
ajmurmann
No. However, unless you have a captive audience you will quickly lose your
education opportunity if you don't. Even with a captive audience you are
likely to achieve better results if your students enjoy the topic and can feel
like they are accomplishing something. Motivation matters as it turns out.

------
numlocked
Reading a couple of the linked posts, I thought this was the best take on it
from mathematician Jordan Ellenberg (it certainly made sense to my comp-sci
brain):

    
    
        It's not quite right to describe what the video does as “proving” that
        1 + 2 + 3 + 4 + .... = -1/12. When we ask “what is the value of the
        infinite sum,” we've made a mistake before we even answer! Infinite
        sums don't have values until we assign them a value, and there are
        different protocols for doing that. We should be asking not what IS
        the value, but what should we define the value to be? There are
        different protocols, each with their own strengths and weaknesses. The
        protocol you learn in calculus class, involving limits, would decline
        to assign any value at all to the sum in the video.  A different
        protocol assigns it the value -1/12. Neither answer is more correct
        than the other.

------
mcbits
I think a good analogy to make this seem less mysterious would be the various
methods of determining an average. E.g. we have no problem saying the "average
family" has 2.4 children despite the impossibility of any family having 2.4
children. The number is just useful in other calculations for income,
expenses, etc.

From there it's not hard to imagine that the -1/12 result could be useful and
justifiable as an intermediate step in other calculations despite being an
"impossible" destination on its own.

------
MichaelBurge
No need to invoke the Riemann zeta function and analytic continuation. In
high school they summed this series (valid for |x| < 1):

1 + x + x^2 ... = 1/(1-x)

Plug in x = 2 to get:

1 + 2 + 4 + 8 ... = -1

There's a million of these series in your dusty old Calculus textbook. Or you
could look in 'generatingfunctionology' to find others.

Complex analysis makes it more interesting, because the additional Cauchy-
Riemann constraints make the solution unique. And so people are more willing
to say that the unique solution is the "true" answer.

Really, I say this is just another example of the complex numbers being weird.
I took a couple of different courses in it (one in college, two online) because it
was clear something interesting was happening with them, but there was never a
unifying theme. I can kind of spot a rule like "analytic functions preserve
90-degree angles" in the Cauchy-Riemann equations, but it hardly explains all
the crazy theorems.

That series is just another example of someone taking a well-behaved series,
analytically continuing it into the complex numbers, and now it's clear
something interesting is happening but it's not clear what.

~~~
thaumasiotes
Who says it's only valid for |x| < 1? That series isn't just a cute bit of
trivia, it is the basis for signed integer mathematics in your CPU. "True"
enough for you?

~~~
MichaelBurge
Are you doing something like this:

* Define the series over the 2-adic numbers rather than the reals

* Associate every 2-adic integer with its corresponding sequence in the inverse limit Z/2^nZ for n = 1..infinity

* Truncate the sequence to a certain precision (an index i), and make an equivalence class a ~ b if a and b agree on the first i elements. You get the ring Z/2^iZ, and identities in 2-adic analysis should carry down to the equivalence classes. Here 1 + 2 + 4 ... = -1 in the 2-adics.

That isn't the way I would've thought about signed integers, but it seems like
it would work.

~~~
thaumasiotes
It's not necessary. Just interpret the bits as coefficients of powers of 2, in
a purely conventional manner. Say that leading digits match the high bit
rather than being 0 regardless of the high bit. And you're done, you have
working signed arithmetic that displays all the behavior you expect and
conforms to the geometric series equality. Here 1 + 2 + 4 + ... = -1 in the
integers.
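
This is easy to check mechanically. Truncated to k bits, the partial sum 1 + 2 + ... + 2^(k-1) = 2^k - 1 is the all-ones bit pattern, which reads as -1 in two's complement; and the gap between each partial sum and -1 is exactly 2^k, which is "small" 2-adically. A quick Python sketch:

```python
K = 32
partial = sum(2 ** i for i in range(K))  # 1 + 2 + 4 + ... + 2^(K-1) = 2^K - 1

def as_signed(u, bits):
    # reinterpret an unsigned bit pattern as a two's-complement integer
    return u - 2 ** bits if u >= 2 ** (bits - 1) else u

print(bin(partial))           # 0b111...1 (all K bits set)
print(as_signed(partial, K))  # -1

# 2-adic view: the partial sums differ from -1 by exactly 2^k,
# which has 2-adic absolute value 2^(-k) -> 0 as k grows.
for k in (1, 8, 16, 32):
    assert (2 ** k - 1) - (-1) == 2 ** k
```

So the geometric-series identity at x = 2 really is the statement that all-ones means -1 in your CPU's signed arithmetic.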

------
amelius
This is what you get when mathematicians start "hacking". Instead, they should
have invented a proper notation conveying what is really meant by the sum and
the "..." ellipsis. This notation apparently does not cut it.

~~~
MichaelBurge
ζ(s) has been in use since 1859, when Riemann introduced it. Riemann's paper
isn't as thorough as it could've been, but it seems clear he knew exactly what
he was doing, being one of the founders of Complex Analysis:

[http://www.claymath.org/sites/default/files/ezeta.pdf](http://www.claymath.org/sites/default/files/ezeta.pdf)

This series is ζ(-1). I don't know about the Youtubers or Scientific American,
but mathematicians studying Complex Analysis know exactly what it means.

------
beefsack
As soon as I see people dropping GIFs and image macros into intelligent
discussions like this, I can't help but immediately become sceptical of what
they're saying.

It happens a lot in fairly serious technical computing blog posts and I've
been trying to wrap my head around why people do it.

~~~
ColinWright
Many times now, people who have looked at my carefully written, carefully
reasoned, well-laid-out writings have gone:

    
    
      Aaarrrggghhhh !!!
    
      WALL OF TEXT !!!
    
      Aaarrrggghhhh !!!
    

It seems that many people need humorous (for some definition of "humorous")
images and animations to make them think that what they are reading is
entertainment. I _hate_ it, but it is an increasing trend, and I'm not
surprised.

Disappointed, but not surprised.

~~~
91823791
I am surprised, because this presentation style used to be restricted to
children's books. I was unable to finish this article because I refuse to have
Riemann mixed up with memegenerator.net.

~~~
nommm-nommm
Unable is not the same as unwilling.

~~~
ars
I had a hard time as well - the moving image was very distracting, I had to
cover it to try to concentrate on what the text was saying.

~~~
nommm-nommm
That is not what the comment I was replying to said. There was a "refusal" to
mix academic content with Internet memes.

------
jiiam
No.

Source:
[https://en.wikipedia.org/wiki/Betteridge%27s_Law_of_Headline...](https://en.wikipedia.org/wiki/Betteridge%27s_Law_of_Headlines)

EDIT: Some further elaboration: I'm sick of the question. The answer is: not
in any sense that would be meaningful to the people to whom this stuff is
being told. You're just being misleading by implying that the sum of the
positive integers can converge. I don't want to hear any "But if you take this
analytic continuation..." or "But in a certain sense...", they're just as
misleading as the thousands of "proofs" that 1=0.

------
olsgaard
In the Numberphile video he doesn't explain the way he calculates 2・S_2. He
says he shifts it, but doesn't explain why that is valid.

    
    
        Shifted version:
             1-2+3-4+5-6 ...
               1-2+3-4+5 ...
        sum: 1-1+1-1+1-1 ...
        
        Multiplied version:
             2-4+6-8+10-12 ...
    

The multiplied version shifts between +(2n) and -(2n). Following the logic
that S_1 = 0.5, because that is the average between 0 and 1, I would argue
that the multiplied version of S2 should equal 0, as that is the average
between a positive constant and its negative (but the variance is going to be
infinite. Doesn't that have a say?).

What if we triple shift?

    
    
        Triple shifted version:
             1-2+3-4+5-6+7-8+9 ...
                   1-2+3-4+5-6 ...
        sum:     2-3+3-3+3-3+3 ...
    

Look! now 2・S2 is equal to 2!

~~~
diogofranco
I don't think your math adds up.

Your triple shifted version would alternate between -1 and 2 depending on the
cut, right? So, still 1/2.

~~~
olsgaard
Yeah, that was a brainfart on my end.

What about the multiplied version? That is how I would intuitively understand
2S_2, and I still don't accept that shifting is the same.
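
One way to make both versions precise without arguing about shifts is Abel summation: evaluate sum a_n·x^n for x just below 1 and let x -> 1. (My choice of method here, not the video's.) Under it, 1-2+3-4+... gives 1/4 and the doubled series 2-4+6-8+... gives 1/2, not 0, so doubling term by term really does double the value:

```python
def abel_probe(a, x, n_terms=100_000):
    # partial evaluation of sum_{n>=1} a(n) * x**n, as a stand-in for x -> 1
    return sum(a(n) * x ** n for n in range(1, n_terms + 1))

def s2(n):
    return (-1) ** (n + 1) * n      # 1 - 2 + 3 - 4 + ...

def doubled(n):
    return (-1) ** (n + 1) * 2 * n  # 2 - 4 + 6 - 8 + ...

for x in (0.9, 0.99, 0.999):
    print(x, abel_probe(s2, x), abel_probe(doubled, x))
# the first column of sums approaches 1/4, the second approaches 1/2
```

The regulator x damps the tail, so the "average of a positive constant and its negative" intuition doesn't apply: the weighted partial sums settle down instead of oscillating.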

------
egjerlow
I'm gonna be contrarian and say, yes it does - it's been physically proven!
The Casimir effect, as other people here have mentioned, depends on this being
true, and it has been experimentally tested.

To me it's one of those things where you just go 'damn' because of the
perplexing relations that exist between math and physics. If anything hints at
what the hell goes on in this universe, to me it's stuff like this.

~~~
quantumhobbit
The analogy to complex numbers is really useful. Of course sqrt(-1) doesn't
exist. But if we just pretend that it does exist, we can build a rigorous
theory out of these imaginary numbers. Once we do that we notice that these
numbers are really useful for calculating real physical things. So maybe
imaginary numbers aren't so imaginary.

Same with these infinite sums. By the math you learn in middle school, you
can't have infinite sums. But break the rules for just a second and again we
have something that is helpful with real physics.

~~~
chowells
I don't know what you mean by sqrt(-1) not existing.

What does it even mean to exist?

Far better to talk about whether a number is defined _in a particular
numerical system_.

In the real numbers, sqrt(-1) isn't defined. But why privilege the real
numbers as "existing"? Despite an official-sounding designation, they're very
deeply weird.

The real numbers are famously uncountable. But any subset of them that can be
enumerated is by definition countable.

Think about the consequences of that for a moment. No matter what you do, the
subset of the reals you can enumerate is countable, meaning the subset you
can't enumerate is uncountable. In a rather flippant way, you could describe
the real numbers as "mostly useless." Most of them exist to make some theorems
work, rather than being a number that you could ever use to describe anything
- solely because describing the number would require an infinite amount of
information.

In a pretty significant sense, it's valid to say that the real numbers are
mostly figments of analysts' imagination.

If they "exist", might as well say complex numbers exist too. They're actually
more useful in physics than real numbers are.

~~~
quantumhobbit
I'll admit that the concept of numbers "existing" here is quite poorly
defined. I was trying to capture the tendency of people, upon encountering
complex numbers for the first time, to just sort of shut down and refuse to
accept that they are "real" (pun intended). It is hard to remember how weird complex
numbers feel after years of middle school math teachers saying that you can't
take the square root of a negative.

A similarly weird feeling is encountering zeta(-1) = -1/12 after years of
calculus teachers telling you to ignore divergent sums because they are
infinite.

~~~
sebastos
I think the point is that if that's true, the concept of complex numbers is
being taught incorrectly. If you tell a student that this number is fake but
useful, what are they to make of that? It just starts to make mathematics seem
spooky and unpredictable.

When you're first learning about imaginary numbers in 8th or 9th grade, the
answer to "what is sqrt(-1)?" _should_ be undefined. If you claim otherwise,
you're pulling the rug out from under their feet, because the number system
that they are familiar with indeed has sqrt(-1) undefined.

Instead, the teacher should go on to introduce a new system of mathematical
objects that have certain rules, and the students could play around with them
and see how they have two components, how you can plot those two components in
2 dimensions, how you can think of them as arrows sticking out of the origin,
how you can combine their components to rotate each other, etc. Then work
backwards into showing that we can call these objects complex numbers for
short, because those operations are similar to addition, multiplication, etc.
And finally, just as a curiosity, you can see that sqrt(z) = i for z = -1 +
0i.

There's no need to introduce this whole concept of an imaginary number line
that points off in a direction nobody can see or measure. The whole takeaway
should be that you can't just square real numbers and get negatives. If you have
something that can "multiply" by itself to get its own inverse, then you have
either overloaded the multiplication operator with something very very
different, or you're dealing with an object that can "rotate" through another
dimension. It's an ordinary two dimensional space, and the only difference
between the two axes is their name, just like "x" and "y". In my opinion, this
lesson should actually be reassuring to a young mathematical intuition:
there's only so many ways to skin this cat.

~~~
nkurz
You're likely aware of it, but in case others aren't, there's a classic online
presentation that makes the same case:
[https://betterexplained.com/articles/a-visual-intuitive-guid...](https://betterexplained.com/articles/a-visual-intuitive-guide-to-imaginary-numbers/)

And here's some "ancient" HN commentary on that article:
[https://news.ycombinator.com/item?id=2712575](https://news.ycombinator.com/item?id=2712575)

------
TheCondor
It doesn't satisfy the Cauchy criterion, so it can't converge.

The trick step of assuming that 1+0+1+0.... converges is also not Cauchy.

It's interesting, and string theory and some other physics get results by
using -1/12, but it's not strictly correct. No more than the omni proof

------
pesenti
For the mathematically inclined, Terence Tao's explanation provides an easier
intuition of the relationship using real-variable methods:
[https://terrytao.wordpress.com/2010/04/10/the-euler-maclauri...](https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/)

------
psyc
The other YouTube popularization that regularly irks me is when people beam
about how "there are lots of different sized infinities", without explaining
that when mathematicians use the words "size" and "infinity" in that context,
it means _cardinality_ (technical term) of an infinite set, within an
explicitly defined set theory that includes the axiom of infinity (all
technical concepts).

------
vaidhy
I had found this YouTube video before that explained it in a way I can
understand -
[http://m.youtube.com/watch%3Fv%3DjcKRGpMiVTw](http://m.youtube.com/watch%3Fv%3DjcKRGpMiVTw)

~~~
CarolineW
I think there's something wrong with the link you've provided:

    
    
        > Our systems have detected unusual
        > traffic from your computer network.
        > Please try your request again later.
    

Perhaps this is the video you intended:

[https://www.youtube.com/watch?v=jcKRGpMiVTw](https://www.youtube.com/watch?v=jcKRGpMiVTw)

~~~
ryanschneider
I came here to post this video as well, Mathologer is a great channel, I often
prefer his explanations to Numberphile's. Highly recommended.

------
xg15
I wasn't familiar with the concept of analytic continuations before, so this
part really confused me:

 _( "The" is the appropriate article to use because the analytic continuation
of a function is unique.)_

Let f: N -> N, f(x) = x be a function on the natural numbers. Then I could
define two functions g(x) and h(x) on N u {foo} that behave just like f for
natural numbers. However, g(foo) is 42 while h(foo) is 666. Wouldn't g and h
both be valid analytic continuations of f in N u {foo}, according to the
definition of analytic continuations explained in the article?

I'm wondering about that as the uniqueness seems to be an important property
for the rest of the explanation, yet it is simply assumed here without any
further explanation.

~~~
Bahamut
Uniqueness requires proof - it relies on a theorem/lemma to the effect that if
an analytic function is constant in a region, then it is constant everywhere.
Uniqueness then follows: if there are two analytic continuations that are the
same in that region, then their difference is 0 there, and thus 0 everywhere,
so the two are the same.

That is a non-trivial theorem/lemma though, and the proof is typically gone
through in a graduate complex analysis course.
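
For reference, the theorem being invoked (usually called the identity theorem) says roughly:

```latex
% Identity theorem (informal statement):
\text{Let } U \subseteq \mathbb{C} \text{ be open and connected, and let }
f, g : U \to \mathbb{C} \text{ be analytic.} \\
\text{If } f = g \text{ on a subset of } U \text{ with a limit point in } U,
\text{ then } f = g \text{ on all of } U.
```

This is also why the natural-numbers example above isn't a counterexample: N ∪ {foo} is not a connected open subset of C, and the functions involved aren't analytic, so the theorem simply doesn't apply there.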

------
lisper
[http://blog.rongarret.info/2014/01/no-sum-of-all-positive-in...](http://blog.rongarret.info/2014/01/no-sum-of-all-positive-integers-is-not.html)

~~~
nkurz
Encountering it now, I think your followup article "Does it matter if the sum
of all integers is -1/12?" makes an excellent case for why it does in fact
matter a lot:

 _The fact that the unsound reasoning in this particular case led to a
conclusion that superficially resembles a conclusion that can also be arrived
at by sound reasoning just makes it that much worse. It encourages people to
think: because this mode of reasoning led to a "correct" conclusion in that
case, then it will probably lead to correct conclusions in other cases._

 _If the problem were confined to mathematics I might not make such a big deal
out of it, but it's not. The problem of people uncritically accepting
conclusions drawn by unsound methods of reasoning pervades our society and
causes real damage._

[http://blog.rongarret.info/2014/01/does-it-matter-if-sum-of-...](http://blog.rongarret.info/2014/01/does-it-matter-if-sum-of-all-integers.html)

------
mcv
As it happens, I have a proof that 1 == 2. Of course my proof involves a step
that involves division by zero, but if it's legal to define undefined results,
then why can't I do that if it helps my proof?

Because as far as I can tell (with my admittedly limited understanding of
mathematics), that's basically what's going on here: they define the result of
1-1+1-1+1-1... to be 1/2, which it can of course never be. The partial sums
are never 1/2; each one is either 1 or 0. Taken to infinity, the only
reasonable definition for
that sum is undefined. If I can say that 1/2 is fine too, then I should also
be able to attach my own definition to 1/0.
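
To be fair, the 1/2 isn't defined arbitrarily: it's the Cesàro sum, the limit of the averages of the partial sums, and that method agrees with the ordinary sum whenever the ordinary sum exists. A minimal sketch:

```python
def cesaro(terms):
    # average of the partial sums of `terms`
    partial_sums = []
    s = 0.0
    for t in terms:
        s += t
        partial_sums.append(s)
    return sum(partial_sums) / len(partial_sums)

grandi = [(-1) ** n for n in range(10_000)]  # 1 - 1 + 1 - 1 + ...
print(cesaro(grandi))  # 0.5: the partial sums alternate 1, 0, 1, 0, ...
```

Note, though, that 1 + 2 + 3 + ... is not Cesàro summable either; getting -1/12 out of it takes a much stronger method (zeta regularization), so skepticism about how far these redefinitions can be pushed isn't unreasonable.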

Also, if string theory really relies on such questionable mathematical steps,
then that would make me question string theory even more. As far as I
understand, string theory makes no testable predictions, which suggests to me
that no results based on this questionable mathematical trick have been
experimentally verified. If there is some real, experimentally verified
physics that relies on the sum of all natural numbers to be -1/12, then I'd
love to be corrected (though I doubt I'll understand it).

------
rrmm
Terry Tao shows how to get from one side of the equality to the other:

[https://terrytao.wordpress.com/2010/04/10/the-euler-maclauri...](https://terrytao.wordpress.com/2010/04/10/the-euler-maclaurin-formula-bernoulli-numbers-the-zeta-function-and-real-variable-analytic-continuation/)

It goes into detail and shows the derivation and internal consistency of the
method.

------
throw94
From Zen and the Art of Motorcycle Maintenance: <i>"The law of gravity and
gravity itself did not exist before Isaac Newton." ...and what that means is
that that law of gravity exists nowhere except in people's heads! It's a
ghost!" Mind has no matter or energy but they can't escape its predominance
over everything they do. "Logic exists in the mind. Numbers exist only in the
mind. I don't get upset when scientists say that ghosts exist in the mind.
It's that 'only' that gets me. Science is only in your mind too, it's just
that that doesn't make it bad. Or ghosts either." Laws of nature are human
inventions, like ghosts. Laws of logic, of mathematics are also human
inventions, like ghosts. "...we see what we see because these ghosts show it
to us, ghosts of Moses and Christ and the Buddha, and Plato, and Descartes,
and Rousseau and Jefferson and Lincoln, on and on and on. Isaac Newton is a
very good ghost. One of the best. Your common sense is nothing more than the
voices of thousands and thousands of these ghosts from the past."</i>

~~~
j1vms
To go a bit further, _everything_ as one perceives it is necessarily in one's
mind (or nervous system, to be precise). One consciously or unconsciously
determines that something is happening outside of their mind, which makes that
determination simply a philosophical one. This is a reminder of why
the highest degree granted at any conventional institution is labelled a
Doctorate of Philosophy in a given subject.

I would go as far as to say that just as the Uncertainty Principle prescribes
a limit on what is knowable in quantum physics, so does philosophy suggest
limits on what will ever be rationally proven through human perception, ideas,
and knowledge.

------
Tergmap
This was in one of Ramanujan's notebooks that he sent to Hardy. It changes
the semantics of established notation instead of inventing new notation.

------
lifthrasiir
Somehow the title is missing an ellipsis, which made me confused for a second.

~~~
CarolineW
Now added - apologies for the confusion.

------
dahart
Good time to remember 0^0=1. Or rather, some people (like Knuth) like to
assign that definition because it's convenient, and it makes everything work
out in certain situations.

I don't think it's bad to say that 0^0=1 when you hear the whole story, which
is that there are multiple right answers given how we define exponents and
operations on 0. But it's misleading to say 0^0=1 and stop there. Similarly,
after reading about this today, it's making more sense to me that we can, if
we choose, define 1+2+3+...=-1/12, and it makes sense in some contexts. It's
just that this isn't the only answer and it isn't the whole story.

I liked the article, and ended up reading a bunch of Evelyn's column earlier
today. I see some people complaining about the pictures... I thought they were
funny and relevant; my only nitpick is I can barely read the light blue text
on the wink gif, the colors are so hard for me to look at.

------
hhjkjhkjhhlkj
Animated gifs made me ill. Couldn't read article :(

------
empath75
As a computer programmer, this actually feels fairly intuitive. If you had a
function that computed the sums of convergent series and returned errors for
inputs for which it was undefined, and replaced it with one that computed the
Riemann zeta function, you'd still get the same results for convergent series,
but would get results for the formerly undefined inputs. Thought of this way,
it's pretty clear that you're talking about performing two different
calculations, and they aren't equal.

------
owenversteeg
Glad SciAm is helping to spread correct info. I think that, basically, the
problem with many math videos in this style is that they tend to make
uncommon, if interesting, assumptions without explaining what those
assumptions are or why they make them, in order to get clicks. This is my
comment from the last time this video was mentioned:

> "The sum of the series 1+2+3+4+5+6... = -1/12" is patently false, without a
> previous assertion that we have assumed the Cesàro sum of a series is equal
> to the series. Even mathematicians working with Cesàro sums surround such
> statements with "this holds only if we interpret the infinite sum defining Z
> to be the Cesàro sum..." [0] Precisely none of the times I've heard the
> "1+2+3+4...=-1/12" bullshit has the person stating it prefaced their
> statement with "this holds only if we interpret the infinite sum defining Z
> to be the Cesàro sum..."

> If you say that "1+2+3+4...=-1/12" without stating your prior assumptions,
> you suddenly allow anyone to make any assumption whatsoever, no matter how
> obscure it is. In your imaginary world, someone could walk into a store and
> claim that "this 95 cent pack of gum is free" because they just made the
> unstated assumption that all non-integers do not exist, and seconds later
> they could return it for a full refund of $0.95 after making the unstated
> assumption that in fact the rational numbers do exist. Numbers, and in fact
> the entire system of mathematics fail to work at all once you allow
> arbitrary, unstated assumptions no matter their obscurity. And in fact, the
> assumption that non-integer numbers do not exist is made far, far more
> frequently than the assumption that the infinite sum defining the sequence
> is the Cesàro sum.

> The only difference is that assuming the non-integer numbers do not exist is
> a defensible assumption in many, many scenarios... but Cesàro summations are
> only invoked about twelve times a year, in pure math or advanced physics
> papers.

> [0] Madras, Neal. "A Note on Diffusion State Distance." arXiv preprint
> arXiv:1502.07315 (2015).

My favorite post on the subject still has to be this:
[http://goodmath.scientopia.org/2014/01/17/bad-math-from-
the-...](http://goodmath.scientopia.org/2014/01/17/bad-math-from-the-bad-
astronomer/)

------
TheRealPomax
tl;dr: there is no "real" or "one" kind of maths; you first have to pick which
field of mathematics you're working in. Arithmetic? Then no: 1+2+3+... is a
divergent sum. Using calculus, specifically analytic continuation of an Euler
series? Then yes: you can apply the rules in such a way that you get this
answer, and it's a mighty useful identity that can be exploited in complicated
proofs.

------
JoeCamel
I loved this video
[https://www.youtube.com/watch?v=XFDM1ip5HdU](https://www.youtube.com/watch?v=XFDM1ip5HdU)
"An exploration of infinite sums, from convergent to divergent, including a
brief introduction to the 2-adic metric, all themed on that cycle between
discovery and invention in math."

------
chris_wot
So when you have an infinite series of the form 1-1+1-1+1-1+..., there is no
single point of convergence, so you average the two values 0 and 1 to give
0.5.

Why does an average of the two values that you can converge on help?

Also, what about something like:

sin(π/2) + sin(3π/2) - sin(5π/2) + sin(7π/2) - ...

what would this be?

~~~
tamana
The average is simply a _summary_ description of the behavior at infinity.
Averages are a standard way to summarize complex phenomena.

More precisely, the sum is 0.5 +/- 0.5: greater than 0, less than 1, and
ambiguous in between.
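The averaging being described is Cesàro summation: take the running average of the partial sums. A minimal sketch, assuming that reading of "average":

```python
def cesaro_means(terms):
    """Running averages of the partial sums of `terms` (Cesàro means)."""
    partial = 0.0          # current partial sum
    partial_total = 0.0    # sum of all partial sums so far
    means = []
    for k, t in enumerate(terms, start=1):
        partial += t
        partial_total += partial
        means.append(partial_total / k)
    return means

grandi = [(-1) ** n for n in range(10_000)]  # 1 - 1 + 1 - 1 + ...
print(cesaro_means(grandi)[-1])  # 0.5: the partial sums 1, 0, 1, 0, ... average out
```

For a series that converges in the ordinary sense, the Cesàro means converge to the same value, which is why this counts as an extension of ordinary summation rather than a contradiction of it.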

------
Chinjut
I didn't think the Numberphile video on this was well done, in its "Math is a
bunch of inscrutable magic you laypeople will never understand, and every
attempt you (as represented by our camera man) make to voice your dumb
intuitions is dumb, you big dumb dumbs!" way, but I even more was annoyed by
the sneering "You're not allowed to do that! There are clear fixed rules and
single, permanent, all-purpose definitions!" dismissals of the 1 + 2 + 3 + 4 +
... = -1/12 result in the backlash.

I'm going to copy and paste the explanation I originally wrote at Quora
([https://www.quora.com/Whats-the-intuition-behind-the-
equatio...](https://www.quora.com/Whats-the-intuition-behind-the-
equation-1+2+3+-cdots-tfrac-1-12/answers/3900029)), because I think it
captures well everything I'd like to say about this at every level of the
discussion:

The sense in which 1 + 2 + 3 + 4 + ... = -1/12 is this:

First, consider X = 1 - 1 + 1 - 1 + .... Note that X + (X shifted over by one
position) = 1 + 0 + 0 + 0 + ... = 1. Thus, in some sense, X + X = 1, and so,
in some sense, X = 1/2.

Now consider Y = 1 - 2 + 3 - 4 + ... . Note that Y + (Y shifted over by one
position) = 1 - 1 + 1 - 1 + ... = X. Thus, in some sense, Y + Y = X, and so,
in some sense, Y = X/2 = 1/4.

Finally, consider Z = 1 + 2 + 3 + 4 + ... Note that Z - Y = 0 + 4 + 0 + 8 +
... = (zeros interleaved with 4 * Z). Thus, in some sense, Z - Y = 4Z, and so,
in some sense, Z = -Y/3 = -1/12.

In contexts where the above reasoning is applicable to what one wants to call
summation, we have that 1 + 2 + 3 + 4 + ... = -1/12. In other contexts, we
don't.
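One standard way to cash out "in some sense" for X and Y is Abel summation, not named above: damp the nth term by b^n and let b approach 1 from below. A quick numeric sketch under that method (note that Z itself is not Abel summable, so this only rigorizes the first two steps):

```python
def abel_partial(coeff, b, terms=50_000):
    """Sum coeff(n) * b**n for n = 0..terms-1; for b just below 1 this
    approximates the Abel sum of the series with coefficients coeff(n)."""
    return sum(coeff(n) * b ** n for n in range(terms))

b = 0.999
x = abel_partial(lambda n: (-1) ** n, b)            # X = 1 - 1 + 1 - ...
y = abel_partial(lambda n: (-1) ** n * (n + 1), b)  # Y = 1 - 2 + 3 - ...
print(x, y)  # close to 1/2 and 1/4, matching the shifting argument
```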

That's it. It's that simple. Everything else I'm going to say is just to
comfort those who are uncomfortable with the game we've just played.

Note that I've said "in some sense" several times in the above argument.
That's because, while we all know how to add and subtract a finite collection
of numbers in the ordinary way, when it comes to adding and subtracting an
infinite series of numbers, there are many different ways of interpreting what
this should mean. Just knowing how to add finitely many numbers doesn't
automatically tell us what it means to add a whole infinite series of them.
And when it comes to summation of infinite series, it turns out there's not
just one nice notion of "summation"; there are many different ones, which are
nice for different purposes.

One such notion is "Keep adding things up, one by one, starting from the
front, and see if the results get closer and closer to some particular value;
if so, that value is the sum". On that account of what summation means, you
clearly won't get any finite answer for 1 + 2 + 3 + 4 + ...; since the terms
never get any smaller, the partial sums will never settle down to a finite
value (and certainly not a negative one like -1/12!). They instead, in a
natural sense, should be understood as summing to positive infinity.

And there's nothing wrong with this! You are not wrong to feel that 1 + 2 + 3
+ 4 + ... is positive and infinite, and math does not deny this; there
absolutely is an account of summation corresponding to this intuition.

It's just not the only account of summation worth thinking about.

We could instead consider other notions of "summation", including ones
designed precisely so that arguments like the one we made at the beginning
(which are very natural arguments to make!) counted as legitimate ways to
reason about such "summation". And then, by definition, we will have that 1 +
2 + 3 + 4 + ... = -1/12, on such accounts of "summation". (In doing so, we
will lose certain familiar properties such as "A sum of positive terms is
always positive". But this is how generalizations work; generalizations very
often lose familiar properties. Even the textbook, limit-based account of
infinite summation loses familiar properties like "The order of summation
doesn't matter". Even finitary summation of integers loses the familiar
property "If a sum is zero, so are all the summands" from basic counting. But
there is a web of resemblances to more familiar kinds of summation which can
justify, in certain moods, thinking of each of these generalizations as a form
of summation itself.)

If you insist that "Keep adding things up and see if the results get closer
and closer to some particular value" is the only account of summation you're
interested in, you'll object to the argument we gave at the beginning, saying
"You're not allowed to do that kind of shifting over and adding to itself
reasoning all willy-nilly; look at what nonsense it produces!".

But it _can_ be made sense of, and is even fruitful to make sense of, in
certain contexts in mathematics, and there is no need to blind ourselves to
this insight.

Again, that's it. It's that simple. Everything else I'm going to say is just
to comfort those who are _still_ uncomfortable. For those who want a more
systematic, formal account of series summation of a sort which validates the
above manipulation, read on:

[Comment too long, will be continued in reply]

~~~
Chinjut
[Continuation of original comment]

We can look at it this way: We can try to assign values to a non-absolutely
convergent series by bringing its terms in at less than full strength,
producing an absolutely convergent series, and then increasing the terms'
strengths towards full strength in the limit, observing what happens to the
sum in the limit as well.

This is the idea behind the traditional account of series summation, mind you:
at time T, we bring in all the terms of index < T at 100% strength and all
other terms at 0% strength. This gives us our partial sums, and as T goes to
infinity, each term's strength goes to 100%, so we can consider the partial
sums as approximating the overall sum.

But we don't have to be so discrete as to only use 100% strength and 0%
strength. We can try bringing in terms more gradually. For example, rather
than having strengths discretely decay from 100% to 0% at some cut-off point,
we can instead have the strengths decay exponentially in the index. (So at one
moment, we may have the first term at 100% strength, the next term at 50%
strength, the next term at 25% strength, etc.). Then we consider what happens
as the rate of exponential decay slows, approaching no decay at all.

In symbols, this means we assign to a series a0 + a1 + a2... the limit, as b
approaches 1 from below, of a0 * b^0 + a1 * b^1 + a2 * b^2 + .... Put another
way, the limit, as h goes to 0 from above, of a0 * e^(-0h) + a1 * e^(-1h) + a2
* e^(-2h) + ..., where e is any fixed base you like. (Let's take e to be the
base of the natural logarithm for convenience, and call this function of h the
characteristic function of the series).

Again, this is not so different than the traditional account of series
summation; we're just using exponential decay rather than sharp cutoff in our
dampened approximations to the full series. (Actually, for the results we're
interested in, it's really just the smoothness of the decay that's of
interest. We could use other forms of smooth decay as well, and get the same
results, but exponential decay is so convenient that I won't bother discussing
it in any further generality right now.)

Now we've turned the question of determining the value of a series summation
into the question of determining the limiting behavior of some function at 0.

Well, it's easy to determine limiting behavior at 0. Just write out a Taylor
series centered at 0, and drop all the terms of positive degree, leaving only
the term of degree 0. Boom, you've got the value of the function at 0.

Except... suppose the Taylor series has a few terms of negative degree as
well. (As in, say, 5h^(-1) + 3 + 4h^2). Then the behavior at 0 isn't given by
the degree 0 term; rather, the behavior at 0 is to blow up to infinity!

And, indeed, we'll find that this is precisely what happens when we look at
the characteristic function of a series like 0 + 1 + 2 + 3 + ...; we get that
f(h) = 0e^(-0h) + 1e^(-1h) + 2e^(-2h) + 3e^(-3h) + ... = e^(-h)/(1 - e^(-h))^2
= h^(-2) - 1/12 + h^2/240 - h^4/6048 + ....
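That Laurent expansion is easy to check numerically. A short sketch: sum the damped series directly and subtract the h^(-2) blow-up; what remains hovers near -1/12:

```python
import math

def damped_sum(h, terms=20_000):
    """Directly sum n * e^(-n*h), the exponentially damped 1 + 2 + 3 + ..."""
    return sum(n * math.exp(-n * h) for n in range(1, terms + 1))

for h in (0.1, 0.05, 0.01):
    print(h, damped_sum(h) - 1.0 / h ** 2)  # tends to -1/12 = -0.0833... as h -> 0
```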

Note that there is a negative degree term there. So in a very familiar sense,
we can say that the behavior of this series is to blow up to infinity.

However, since any time a series DOES converge in the ordinary sense, the
value it converges to is the degree 0 term of this characteristic function, it
is very tempting and fruitful to think of the degree 0 term as the sum even
when there are those pesky negative degree terms.

And in this more general sense, we see that the value of 0 + 1 + 2 + 3 + ...
is that degree 0 term of f(h): -1/12. [In fact, we can understand the
argument at the beginning of this post as outlining a rigorous calculation of
this degree 0 term. (See [https://www.quora.com/Mathematics/Theoretically-
speaking-how...](https://www.quora.com/Mathematics/Theoretically-speaking-how-
can-the-sum-of-all-positive-integers-be-1-12/answer/David-
Joyce-11/comment/3444455) to see this spelt out)]

Now, you _can_ propose other manipulations to produce other answers for this
series in other ways, but this is one particular systematic account of
summation which leads to this value alone and no other. [That is, for the
series whose nth term is n. I should warn that, in the presence of negative
degree terms in the characteristic function, this method is sensitive to
index-shifting, so we would get different results if, for example, we
considered 1, 2, 3, ... to be not the 1st, 2nd, 3rd, ..., terms, but rather
the 0th, 1st, 2nd, ..., terms, respectively.]
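That index-shift sensitivity can be seen with the same exponential regulator. Re-indexing 1, 2, 3, ... as the 0th, 1st, 2nd, ... terms gives the characteristic function 1/(1 - e^(-h))^2 = h^(-2) + h^(-1) + 5/12 + ..., whose degree 0 term is 5/12, not -1/12. A sketch:

```python
import math

def shifted_damped_sum(h, terms=20_000):
    """Sum (n+1) * e^(-n*h): the same terms 1, 2, 3, ... but indexed from 0."""
    return sum((n + 1) * math.exp(-n * h) for n in range(terms))

h = 0.01
# Strip the h^(-2) and h^(-1) blow-up terms; the constant left over is ~5/12.
print(shifted_damped_sum(h) - 1.0 / h ** 2 - 1.0 / h)
```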

Why should you care about this particular account of summation? Well, you
don't have to; I can't force you to care about anything. But it's fairly
natural and comes up with some significance in mathematics. It is, in a
certain formal sense, precisely the account of summation which allows one to
interpret the sum 1^n + 2^n + 3^n + ... for general complex n, yielding the
Riemann zeta function (of great significance in number theory, and whose
behavior (specifically, the Riemann hypothesis concerning its zeros) is
generally considered one of the most important open problems in mathematics).
So, you know, there's reason for some people to care about it, even if you
don't.

------
blueprint
No, it doesn't.

------
quantumhobbit
I forget who said, "In mathematics, you don't learn things, you just get used
to them." But it applies here.

~~~
CarolineW
According to this link[0] it was John von Neumann:

    Young man, in mathematics you don't
    understand things. You just get used
    to them.

[0]
[https://en.wikiquote.org/wiki/John_von_Neumann](https://en.wikiquote.org/wiki/John_von_Neumann)

------
pfortuny
It boils down to what you mean by "equal."

------
ojknkjnsdf
I don't understand why people keep playing with infinite series like this...
it's not sound math and doesn't lead to anything useful. Just a cheap way for
worthless mathematicians to feel clever.

------
powertower
Is it saying that the complex version of the series (1+0i)+(2+0i)+(3+0i)... =
-1/12?

------
serge2k
Knew it was that numberphile video before I clicked.

