
Proof Claimed for Deep Connection between Prime Numbers - shashashasha
http://www.nature.com/news/proof-claimed-for-deep-connection-between-primes-1.11378
======
impendia
I personally know Brian Conrad (quoted in the article). He has an encyclopedic
knowledge of algebraic number theory and algebraic geometry; I would say he's
among the top ten people worldwide who could reasonably assess whether
Mochizuki's proof is correct. And he has a great nose for bullshit, and little
patience for it.

He is taking this claim seriously. He doesn't necessarily believe it's correct
(presumably he's a bit careful in what he says to the press), but he seems to
think it has a shot, and is worth paying attention to.

~~~
TallboyOne
I think it's fascinating there are only ~10 people on this earth 'smart
enough' to get the proof. That is seriously some next-level stuff going on...

~~~
philip1209
"Reasonably Assess" != "Smart enough"

Presumably, it is experience in algebra, an established reputation in the
field, and familiarity with how mathematics is communicated that allow this
subset of people to reasonably assess the papers. It is not that the theorem
is so advanced that only 10 people on earth possess the aptitude to understand
it; it is that only 10 people have the foundation on which to understand the
theorem, due to its specificity.

~~~
mreid
Right, because "experience in algebra, established reputation in field, and
familiarity with communication of math" can be acquired by anyone with only a
modicum of smarts.

I think you are underestimating just how intricate and challenging
understanding a proof can be. Especially one of this length and by someone of
this standing.

~~~
lmm
Reputation in the field is not a proxy for general intelligence. That there
are only 10 people with the reputation to make it worth them assessing the
proof does not mean those are the only 10 people with the smarts to understand
it.

And yes, a large part of being able to work in a specialized field like this
really is about having the specialized knowledge and vocabulary, rather than
general intelligence. If this is anything like most proofs in mathematics,
most reasonably intelligent people would be able to understand it given ten
years of studying the field. It's not about being smart enough, there's just
so much existing knowledge that you need a lot of specialization to be able to
really contribute to any given area.

~~~
philip1209
I do not assert that reputation is a proxy for general intelligence. However,
reputation is a cornerstone of becoming a peer reviewer, and it takes peer
review for an assertion to become science. My assertion is rather that having
reputable academics backing the paper will reduce doubt about its claims and
aid its adoption, not that reputation functions as a proxy for intelligence.

------
tlb
Changed to the Nature article that this is a repost of. The SA article had
borked formatting, losing superscripts.

~~~
shashashasha
Thanks, apologies for the poorly formatted link!

------
colanderman
_It states that for integers a+b=c, the ratio of sqp(abc)^r/c always has some
minimum value greater than zero for any value of r greater than 1. For
example, if a=3 and b=125, so that c=128, then sqp(abc)=30 and sqp(abc)^2/c =
900/128. In this case, in which r=2, sqp(abc)^r/c is nearly always greater
than 1, and always greater than zero._

Obviously I'm reading this wrong -- because as stated (and assuming that a, b,
and c are positive integers) this seems trivially true -- sqp(abc) cannot be
zero, r cannot be negative, and c is finite, so therefore sqp(abc)^r/c is
greater than zero, QED.

Does _Nature_ mean that the quantity does not _approach_ zero as r tends to
infinity (or some such)? Their example sure doesn't seem to indicate such.

~~~
ipince
Yes, sqp(abc)^r/c is greater than 0. But what we are saying here is that it is
_bounded away from zero_.

The formulation in the article is a little confusing/non-intuitive. A perhaps
more 'layman' explanation is that for every r > 1, there are only finitely
many coprime tuples (a,b,c) such that a+b = c and c > sqp(abc)^r. In other
words, in "most" cases, sqp(abc)^r is greater than c!

This is equivalent to the definition in the article. Why? Well, given a fixed
r, there are finitely many tuples that satisfy c > sqp(abc)^r. Since there are
finitely many, we can pick a constant K > 0 such that c < K*sqp(abc)^r for ALL
tuples (a,b,c). Thus:

sqp(abc)^r/c > 1/K > 0, as the article mentions.
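To make sqp concrete, here is a short Python sketch of the squarefree-part
function (my own illustrative code, not from any of the papers), checked
against the article's worked example:

```python
def sqp(n):
    """Squarefree part ("radical") of n: the product of its
    distinct prime factors, found by simple trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            result *= d          # record the prime once
            while n % d == 0:
                n //= d          # strip all copies of it
        d += 1
    if n > 1:                    # leftover prime factor > sqrt
        result *= n
    return result

# The article's example: a = 3, b = 125, c = 128
a, b = 3, 125
c = a + b
print(sqp(a * b * c))        # abc = 48000 = 2^7 * 3 * 5^3, so sqp = 30
print(sqp(a * b * c) ** 2, c)   # the ratio 900/128 from the article
```

With r = 2 the ratio sqp(abc)^r/c is 900/128 here, exactly as the article
computes.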

~~~
takluyver
Thanks, after reading that through a couple of times, I think I understand
what it means.

Looking back at the article, this is the critical bit:

> the ratio of sqp(abc)^r/c always has some minimum value greater than zero
> for any value of r greater than 1

That looks like it's just saying sqp(abc)^r/c > 0. But from what you're
saying, the bit about the minimum value (1/K) is important. For each value of
r, it should be possible to define a minimum value greater than 0 which
sqp(abc)^r/c can never be smaller than. Right?

Are those minima interesting in themselves, or just as part of the whole
conjecture? Is there a method for finding the minimum for a given value of r?

~~~
sanxiyn
Your interpretation is right, and those minima are interesting in themselves.
For example, when [http://quomodocumque.wordpress.com/2012/09/03/mochizuki-
on-a...](http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/) was
posted, the very first comment was about that.

"It almost looks as if the method doesn't (as presently written) produce
explicit constants. Hopefully as the ideas become more widely understood,
effective constants will be able to be extracted."

------
magicalist
this is for the abc conjecture[1] (I was thinking maybe they were being
headline-y about the Riemann hypothesis before I clicked through), which, if
proved, would be _extremely_ interesting due to the number of conjectures and
other theorems that have been shown to be equivalent to it or direct
consequences of it.

The proof will be well beyond me, but the conjecture itself is pretty
accessible, as are many of its connections.

This line from the article was confusing:

> Fifteen and 17 are square free-numbers, but 16 and 18 — being divisible by
> 42 and 32, respectively — are not.

but that's supposed to be 4^2 and 3^2, respectively.

[1] <http://en.wikipedia.org/wiki/Abc_conjecture>

~~~
kzahel
Also the statement for Fermat's Last Theorem isn't showing the exponents
correctly (should be a^n+b^n=c^n)

~~~
alttag
It took a moment to puzzle these out (parent and GP's comments). The article
would be a great deal clearer if the typography were intact.

~~~
3JPLW
Indeed. Since it's simply copy-pasted from Nature News[0] (with attribution),
it may be worthwhile to read it there directly.

I didn't understand that the ratio in question was sqp(abc)^r / c until I
clicked through to the original story.

[0]: [http://www.nature.com/news/proof-claimed-for-deep-
connection...](http://www.nature.com/news/proof-claimed-for-deep-connection-
between-primes-1.11378#b1)

------
mmaunder
Any implications for RSA and other algorithms? [RSA relies on factoring the
product of primes being a hard problem]

~~~
alokm
The claimed proof doesn't directly help us factor products of primes. At least
not now. It has more to do with the "structure" of primes.

~~~
DeepDuh
The way I see it, Mochizuki's work could turn out to be like Maxwell's theory
of electromagnetism. It could give us a definitive theory on the nature of
prime numbers, but first someone needs to come along, understand it, and solve
it. A 'solution' of a differential equation system would be a formula; a
'solution' of Mochizuki's system would be an algorithm. Many new algorithms
could come out of this, and one of them could be an algorithm for prime
factorization.

But: I'm not a mathematician, so maybe I'm just fantasizing right now.

------
gwillen
Warning: This article is mangled by the flattening of all the superscripts in
it. Given that it's about powers, the math is all but unreadable. You have
been warned. :-)

~~~
tlrobinson
Warning: this comment is out of date, the Nature article has correct
formatting :)

------
dudus
From Wikipedia:

> Mochizuki entered Princeton University at age 16 and received a Ph.D. under
> the supervision of Gerd Faltings at age 22. In 2002, he became a professor
> at the Research Institute for Mathematical Sciences in the Kyoto University.

Very impressive.

------
cecilpl
I found the formulation of the theorem difficult to digest, but the wikipedia
version was much clearer:

For every ε > 0, there are only finitely many triples of coprime positive
integers (a, b, c) with a + b = c such that c > d^(1+ε), where d denotes the
product of the distinct prime factors of abc.
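The finiteness claim is easy to poke at numerically. A brute-force sketch (my
own illustrative code, nothing to do with the proof) that lists the rare
coprime triples where c already exceeds the plain radical d:

```python
from math import gcd

def rad(n):
    # product of the distinct prime factors of n (the "d" above)
    r, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            r *= d
            while n % d == 0:
                n //= d
        d += 1
    return r * n if n > 1 else r

# "abc hits": coprime a + b = c with c > rad(abc). Even without the
# exponent 1+ε these are scarce; with it, the conjecture says there
# are only finitely many.
hits = [(a, c - a, c)
        for c in range(3, 500)
        for a in range(1, c // 2 + 1)
        if gcd(a, c - a) == 1 and rad(a * (c - a) * c) < c]
print(hits[:3])   # starts with (1, 8, 9): rad(72) = 6 < 9
```

Scanning c up to 500 finds only a handful of hits, which is the "most cases"
intuition from the thread above in miniature.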

~~~
csense
How do you get epsilon to appear in comments? The only docs I could find on
formatting are <http://news.ycombinator.com/formatdoc> and it has no clues.

~~~
lutusp
> How do you get epsilon to appear in comments?

Unicode, of course: ε

When I need a Unicode character, I find it with my Unicode search page:

<http://arachnoid.com/unicode>

Then I copy the character into the document I'm working on. Others have more
efficient ways to enter Unicode from their keyboards, like having a list of
codes and a keyboard layout that allows the codes to be entered efficiently.

~~~
mseebach
I just Google "unicode [name of character]". Hasn't let me down yet.

------
dude_abides
Relevant meta-commentary on the proof by Marty Weissman and Minhyong Kim:

[http://mathoverflow.net/questions/106560/philosophy-
behind-m...](http://mathoverflow.net/questions/106560/philosophy-behind-
mochizukis-work-on-the-abc-conjecture)

~~~
bugsbunnyak
One way to deal with an overzealous mod:

 _Such a great answer proves that the question was worth asking. – Olivier 2
days ago

@Olivier: while I agree with your comment regarding the answer, I would like
to add that your inference regarding this particular question to me comes
close to saying: the great and heroic work of the fire-fighters proves that it
was worth setting the house on fire. – quid 2 days ago

On MathOverflow, though, it is possible to edit the original house so that it
is no longer inflammatory, while still recording the work of the firefighter.
– Terry Tao 2 days ago_

[edit: formatting]

~~~
nl
The fact that we have a f**king Fields medallist having to deal with
overzealous mods makes me sad.

------
bvaldivielso
Ok, I've read [1] that this guy has developed a new set of mathematical
objects and techniques which he is (almost) the only one to understand.

That means that if someone wants to check whether the proof is right, they'll
first have to learn how to use all these objects.

I guess we won't have a verified proof for some years.

[1] [SPA] [http://gaussianos.com/posible-demostracion-de-la-
veracidad-d...](http://gaussianos.com/posible-demostracion-de-la-veracidad-de-
la-conjetura-abc/)

~~~
saalweachter
I don't expect the new maths to cause a delay from learning so much as from
verifying.

In some sense, proof-checking is a trivial linear-time operation: you check
that each step follows from the last, and if they all do, the proof is
correct. Proof checking is something that computers do well, except that
describing a proof to a computer is usually non-trivial.
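In a machine-checked setting such as Lean, for example, "follows from the
last" is literal: every step must reduce to a rule the kernel already accepts.
A toy sketch:

```lean
-- Each step must follow mechanically: the kernel accepts `rw [h]`
-- only because it literally rewrites a into b using the hypothesis.
example (a b : Nat) (h : a = b) : a + 1 = b + 1 := by
  rw [h]
```

The hard part, as noted, is that translating a 500-page human proof into steps
this explicit is anything but trivial.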

Where proof-checkers -- human and otherwise -- will get hung up, especially
with the new maths, is that it may not be obvious how each step follows from
the last.

What will happen is something a bit interactive: if the proof checker cannot
intuitively see the logic in one step of the proof, he may first try to prove
the lemma himself. If he can't, he'll go back to the original author of the
proof and say, "Hey, I don't see how you go from step 518 to step 519. Can you
prove it?" Sometimes the author easily can; he just didn't elaborate on that
step in the proof because it was so obvious to him. Sometimes he can't; either
in trying to prove that lemma he finds a contradiction, in which case the
proof is incorrect, or maybe he just can't prove it, and the proof is ...
incomplete.

It's this last situation which causes these grand proofs to drag out for so
many years. At some point in the 500 pages, there is a jump which the author
didn't notice, and it takes him another few years to prove that jump. The
checkers have a much easier job, because they just have to _try_ to follow the
logic; if they fail, the burden is on the author to make the point at which
they stick more clear.

------
freyrs3
Here's a presentation by the mathematician the article is about. It's on the
nature of "Inter-universal Teichmuller Theory"[1], which apparently is the
work leading up to the proof.

[1] [http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-
universal%20...](http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-
universal%20Teichmuller%20Theory%20---%20a%20Progress%20Report.pdf)

~~~
Q6T46nT668w6i3m
I recommend reading Martin Weissman's explanation from MathOverflow:

[http://mathoverflow.net/questions/106560/what-is-the-
underly...](http://mathoverflow.net/questions/106560/what-is-the-underlying-
vision-that-mochizuki-pursued-when-trying-to-prove-the-abc/106658#106658)

[edit: Oops. Fixed attribution.]

~~~
omaranto
That explanation was not written by Gowers at all (his name appears on it now
that the answer has been marked community wiki, because he is the most recent
person to edit it, to fix a typo). Indeed, although Gowers is certainly
brilliant, this is far from his domain of expertise. That answer was written
by Martin Weissman (username: Marty); the author information is at the bottom
right corner of any answer and here says "Marty 98%".

Also, take a look at Minhyong Kim's answer.

------
samstave
Can anyone explain in lag terms what the implications of this may be.

Just the phrase "deep connection between prime numbers" sounds really
interesting, so, if this were true would there be any practical application of
this in the next (N) years that would not be possible without this proof?

~~~
sanxiyn
The abc conjecture is asymptotic. That is, its statement is of the form "for
every positive epsilon there exists a constant". A proof can be constructive,
that is, an algorithm that can calculate a constant from an epsilon, or it can
be nonconstructive. For any practical application it would seem a constructive
proof is needed.

I am not at all qualified to talk about the claimed proof, but I think it
would be nonconstructive.

~~~
samstave
Thanks, but I said in "lay" ("lag" due to stupid iPhone AC) terms...

I don't really get how your response answers my question.

Can you really dumb it down for me; explain like I am 5 how this will result
in practical applications in the next (N) years.

thanks

~~~
jgeralnik
His answer was that he thinks there will be no practical applications from
this proof, since it merely states that something exists without giving us an
idea of how to find that thing.

~~~
repsilat
Yeah. For the most part, non-constructive proofs are only good for proving
things about existing algorithms: their convergence properties, their
correctness, their running time, etc. Formal proof often isn't necessary in
practice, though; "seems to work well enough" is ok in many domains.

Of course, this isn't to say that new mathematics based on the result and on
the techniques used to prove it won't lead to new algorithms.

------
lelf
[http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-
universal%20...](http://www.kurims.kyoto-u.ac.jp/~motizuki/Inter-
universal%20Teichmuller%20Theory%20I.pdf) — oh, I need some time to accept
this is not machine generated :)

------
bradleyjoyce
What are the practical applications of such a proof? I'm genuinely curious.

~~~
aswanson
There are probably implications for cryptography.

~~~
biomechanica
I don't know much about cryptography. So what you're saying is basically that
cryptographic algorithms largely rely on primes?

This is fascinating.

~~~
csense
> cryptographic algorithms are largely implemented relying on primes

One of the steps in generating an RSA key is, "Randomly generate two large
prime numbers p and q. Multiply p and q, and save the product pq = p*q as part
of your public key."

If anyone can figure out p and q from pq, then they can figure out your
private key.

A while ago there was an article on HN about using the Euclidean algorithm to
break SSL.

If you have two keys pq and rs, and one of the factors is the same (like p =
r), then there's a well-known algorithm dating back thousands of years which
you can use to quickly figure out the common factor's value.

So what the authors of a paper did was gather a couple million public keys
from SSL websites, then run the Euclidean algorithm on each pair of keys, and
many of them had common factors. Why? As noted above, the prime numbers are
supposed to be picked "randomly," but they weren't picked randomly enough.
(Getting a computer to produce random numbers is an interesting problem. The
usual approaches are "faking it" -- technically called pseudorandom numbers --
and using input from hardware that interfaces with the outside world, like the
system clock, keyboard, mouse, network, etc.)
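The attack described above fits in a few lines. A toy sketch, with small
primes standing in for the real ~1024-bit ones:

```python
from math import gcd

# Toy primes standing in for the large "random" primes p, q, r.
p, q, r = 1009, 1013, 1019

n1 = p * q   # one site's public modulus
n2 = p * r   # another site's modulus, accidentally reusing p

# Euclid's algorithm instantly exposes the shared factor...
shared = gcd(n1, n2)
print(shared)                      # 1009

# ...and with it, both moduli are fully factored.
print(n1 // shared, n2 // shared)  # 1013 1019
```

This is exactly why the paper's scan of millions of SSL keys was so damning:
the gcd of every pair of moduli is cheap to compute, and any shared prime
breaks both keys at once.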

~~~
ColinWright
There's even more at work here. In the context of RSA and DHMW there are bad
primes and good primes. A prime _p_ is potentially bad if _p-1_ has no large
factors. The algebraic structures associated with _pq_ are then combinations
of small algebraic structures, and that can (sometimes) make _pq_ easier to
factor.

So not just any old primes will work, and if two unrelated, unassociated
people use the same software to generate "good" primes, sometimes they will
coincide, leading to the potential for finding their factorizations with GCD
on pairs.
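For a concrete sense of why a smooth p-1 used to be considered dangerous, here
is a sketch of Pollard's p-1 method (my own toy code): if every prime-power
factor of p-1 is small, the factor falls out of a gcd almost immediately.

```python
from math import gcd

def pollard_p_minus_1(n, bound=50):
    # If p | n and p-1 divides j! for some small j, then by Fermat's
    # little theorem 2^(j!) ≡ 1 (mod p), so gcd(2^(j!) - 1, n)
    # reveals p without ever searching for it directly.
    a = 2
    for j in range(2, bound):
        a = pow(a, j, n)      # a = 2^(j!) mod n after this step
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None

# p = 1009 is a "bad" prime: p - 1 = 1008 = 2^4 * 3^2 * 7 is smooth.
print(pollard_p_minus_1(1009 * 1013))   # finds 1009 in a few steps
```

With realistically large random primes, though, p-1 is overwhelmingly unlikely
to be this smooth, which is the point the reply below makes.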

~~~
pbsd
There is little point in concerning oneself with smooth p-1, p+1, Φ_i(p)
values. Randomly-chosen primes will have a negligible chance of having such
properties.

Additionally, the more powerful ECM algorithm can be parametrized to generate
about p different elliptic curves E_{a,b}(p), where p is found whenever
E_{a,b}(p) has a smooth order. Once again, the chance of finding a pair (a,b)
that results in such a smooth order is negligible. This is, in a nutshell, the
result of Rivest and Silverman [1] dispelling the need to jump through many
hoops to generate strong primes.

[1] [http://people.csail.mit.edu/rivest/RivestSilverman-
AreStrong...](http://people.csail.mit.edu/rivest/RivestSilverman-
AreStrongPrimesNeededForRSA.pdf)

~~~
ColinWright
Right, so what you're saying is that when the primes are smaller, as they used
to be able to be, strong (and safe) primes were a good idea because the
earlier methods of factoring could exploit their potential weaknesses.
However, it's now the case that with ECM and GNFS methods of factoring we've
had to move to much, much larger primes, so the weaknesses are no longer
relevant.

Thank you.

------
aliz
Shinichi Mochizuki's homepage is worth a look :)
<http://www.kurims.kyoto-u.ac.jp/~motizuki/top-english.html>

~~~
pav3l
The general message there is "I'm always up in the skies working on my proofs.
No office hours this week"

------
ipince
This guy's thesis has an approachable explanation of the conjecture, as well
as some of the interesting theorems that it implies (including an asymptotic
version of Fermat's Last Theorem):

<http://jeffreypaulwheeler.com/Masters%20Thesis.pdf>

~~~
Devilboy
404

~~~
andrewcooke
there's an apostrophe before the "s" in "Masters" that is stripped out by the
HN code for some reason. either add that or go to
<http://jeffreypaulwheeler.com/> and select the link "The abc Conjecture" near
the start of "Papers Section".

------
Mordor
Is it naive to say these proofs are overly long - that there's something a
little simpler behind them?

~~~
ColinWright
At this stage, yes, it's naive. The ideas and techniques here are pretty much
completely new, and it will take years, if not decades, for people to
understand them deeply enough to find the essential core of what's happening.

No, they are not overly long. It's likely that papers explaining them will
effectively be an order of magnitude (base 10) longer.

------
kzahel
[http://quomodocumque.wordpress.com/2012/09/03/mochizuki-
on-a...](http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/)

[edit - linked the wrong discussion]

~~~
rfurmani
That link is old and was posted a few years before the final paper. The more
recent discussion is here:
[http://quomodocumque.wordpress.com/2012/09/03/mochizuki-
on-a...](http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/)

------
fasteddie31003
If there is some kind of connection between prime numbers, wouldn't a lot of
the current work in encryption be thrown out?

~~~
lutusp
> If there is some kind of connection between prime numbers, wouldn't a lot of
> the current work in encryption be thrown out?

Unlikely. The recent work doesn't improve our ability to factor large
integers, the key to modern cryptography. If someone created a function that
instantly produced the prime factors for large integers without the present
difficult process, that would change everything, but that's not a likely
outcome.

The next real challenge to modern cryptography isn't this work, but quantum
computing. If quantum computing came to full flower with no obstacles sometime
in the future, that would represent a basic change because it would perform an
end run around the present computational difficulties that assure the security
and reliability of our present methods.

~~~
thebigshane
Ok, off topic, sorry, but... Are you saying that if/when quantum computing
comes around, existing encryption systems would suddenly be ineffective? So if
the NSA gets their hands on a quantum computer, they'd be able to brute-force
a 1024-bit RSA-encrypted file in far less time than their other supercomputers
would need?

I found some info here, but I didn't get enough out of it to answer my
question directly. I think the answer is "Well theoretically, but it's
complicated":

[http://stackoverflow.com/questions/2768807/quantum-
computing...](http://stackoverflow.com/questions/2768807/quantum-computing-
and-encryption-breaking)

<http://en.wikipedia.org/wiki/Quantum_computer#Potential>

<http://en.wikipedia.org/wiki/Shor%27s_algorithm>

<http://en.wikipedia.org/wiki/Post-quantum_cryptography>

    
    
      [...] most currently popular public-key cryptosystems 
      rely on the integer factorization problem or discrete 
      logarithm problem, both of which would be easily solvable 
      on large enough quantum computers using Shor's algorithm. 
      Even though current publicly known experimental quantum 
      computing is nowhere near powerful enough to attack real 
      cryptosystems, many cryptographers are researching new 
      algorithms, in case quantum computing becomes a threat in 
      the future.

~~~
jgeralnik
Yes. The naive algorithm for factorization runs in O(2^N) (where N is the
number of bits in the number you want to factor), which is impractical for any
sufficiently large N (2^64 > 10^19, 2^1024 > 10^308, which is BIG). There are
better known algorithms, but none of them are polynomial, so brute force is
highly impractical.

Shor's algorithm, on the other hand, runs in roughly O(N^3). For an idea of
the difference, 64^3 is about 260,000 versus 2^64 > 10^19, and 1024^3 is about
10^9 versus 2^1024 > 10^308.

So Shor's algorithm could be used to very quickly factorize large numbers and
would effectively break RSA. We just need a quantum computer, which as far as
we know nobody has yet.
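To make the gap concrete, here is the naive method from the first paragraph as
code (illustrative only); even this "optimized" version needs on the order of
sqrt(n) divisions, which is hopeless for 1024-bit moduli:

```python
def trial_factor(n):
    # Naive factoring by trial division: up to ~sqrt(n) steps,
    # i.e. O(2^(N/2)) for an N-bit n. Fine for toy numbers,
    # hopeless for the 1024-bit moduli RSA actually uses.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1   # n itself is prime

print(trial_factor(1009 * 1013))   # (1009, 1013) after ~1000 steps
```

A 1024-bit modulus would need roughly 2^512 such steps, versus the polynomial
step count of Shor's algorithm on a large enough quantum computer.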

~~~
thebigshane
Ok, thanks. But when I start hearing conspiracy theories that the NSA (or
anyone) has already developed one, I should basically look for an encryption
algorithm that isn't based on prime factorization?

~~~
jgeralnik
Encryption algorithms exist, and are being developed, that are not based on
prime factorization and that no known quantum algorithm can break in
polynomial time.

There will probably be significant warning before quantum computers exist
that can break RSA. In that time it will probably become the norm to stop
using algorithms that quantum computers render obsolete.

