
> Among large numbers, the expected gap between prime numbers is approximately 2.3 times the number of digits; so, for example, among 100-digit numbers, the expected gap between primes is about 230.

> no matter how sparse the primes become — you will keep finding prime pairs that differ by less than 70 million.

Does this mean even for numbers longer than 70m/2.3 = 31m digits, there is bound to be at least one prime every 70m numbers or so?

No, it doesn't mean there is a prime EVERY 70m or so. It means that as you go on examining pairs of consecutive primes, you will keep finding more and more where the gap is 70m or less. You will find many more where the gap is more than 70m. The average gap will be increasing. The average gap at N is approximately log(N). (That's the natural logarithm, not the common logarithm).

That's where that 2.3 you quoted comes from. If you are looking at numbers with d decimal digits, they are around 10^d and so the average gap is approximately the log of that, log(10^d) is d log(10), and log(10) is approximately 2.3. Hence, the average gap for d digit numbers is approximately 2.3 x d.
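As a sanity check, the 2.3 × d estimate can be tested empirically. This is a minimal sketch (the sieve and the choice of d = 5 are mine, not from the thread): among 5-digit numbers the estimate predicts an average gap of about 11.5, and the measured average comes out nearby.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes; returns all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

d = 5  # digit count; predicted average gap is about 2.3 * d = 11.5
ps = [p for p in primes_up_to(10 ** d) if p >= 10 ** (d - 1)]
avg_gap = (ps[-1] - ps[0]) / (len(ps) - 1)
print(avg_gap, math.log(10) * d)  # measured vs. predicted, both near 11
```

The agreement gets better as d grows, since log(N) varies less across the range of d-digit numbers.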

Side note: Most people represent the natural logarithm in ascii with ln(). Makes it easier to read for people expecting log(10) to be 1...

By most people you mean computer people. In math "natural log" is generally just written log().

No, there can be huge gaps with no primes. For example, none of the n−1 consecutive numbers from n!+2 to n!+n is prime, because each n!+k (for 2 ≤ k ≤ n) is divisible by k.
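The factorial construction can be checked directly: for each k from 2 to n, k divides both n! and k, hence divides n!+k, and since n!+k is larger than k it must be composite. A quick sketch:

```python
from math import factorial

n = 10
f = factorial(n)

# The n-1 consecutive numbers f+2, f+3, ..., f+n form a prime-free stretch:
desert = [f + k for k in range(2, n + 1)]
for k, m in zip(range(2, n + 1), desert):
    # k divides both n! and k, so it divides n!+k; and n!+k > k,
    # so n!+k has a proper divisor and is composite.
    assert m % k == 0 and m > k
```

The same argument with n = 70,000,001 gives a prime-free stretch of 70 million consecutive numbers.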

What this proof shows is you never get to a point where the gap is always bigger than 70 million, forevermore. This is a big step towards the "twin primes conjecture" which claims there are an infinite number of pairs of primes only 2 apart.

Of particular significance:

since the gap never gets always-bigger than 70 million, and there are an infinite number of gaps between primes, by something resembling the pigeonhole principle it can be shown that there are an infinite number of prime pairs for at least one specific gap size between 2 and 70 million. That is, there might not necessarily be an infinite number of pairs of primes exactly 2 apart, but there are an infinite number of primes exactly N apart for some N<70,000,000.

The "something resembling the pigeonhole principle" is the fact that if the union of finitely many sets is infinite, then at least one of the sets must be infinite as well.

I don't know whether it has a name.

I think it's usually just called the pigeonhole principle. It's a special case of the Infinite Ramsey Theorem.

Perhaps. But it's simpler than all that.

Assume that each of the (at most) 70 million disjoint sets has finite cardinality. Then the cardinality of the union is the sum of the cardinalities: a finite sum of finite numbers, hence finite.

Contradiction; QED.

I don't get conjectures. Does it mean they are widely believed to be true, just unproven? If so, why, where does this confidence come from? If not, what makes them interesting in the first place?

The confidence comes from two things, really:

1. The conjecture has been found to hold for a large number of trial cases, and

2. The conjecture has an intuitive aura of rightness.

A good example is the 3n + 1 conjecture: http://en.wikipedia.org/wiki/Collatz_conjecture
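The 3n + 1 map mentioned above is easy to play with. A minimal sketch (the function name is mine): repeatedly halve even numbers and map odd n to 3n + 1, counting steps until 1 is reached. The conjecture is that this always terminates; no proof is known, but every starting value ever tried does reach 1.

```python
def collatz_steps(n):
    """Count 3n+1 / halving steps until n reaches 1.
    Assumes the conjecture holds for n (otherwise this loops forever)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# Small starting values reach 1 quickly, but the step counts are erratic:
# 27, for instance, climbs as high as 9232 before falling back to 1.
for n in range(1, 10000):
    collatz_steps(n)  # terminates for every value tried
```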

Anyway, obviously point two is extremely subjective, but I think it's the crux of why long-standing conjectures are interesting. Why should something feel correct without necessarily being so? If it does turn out to be correct, what about its nature put people on its scent trail? Mathematics so often seems to be a kind of hermetic universe unto itself, so part of the beauty of conjectures is that they're kind of a contact point between that universe and human cognition.

Where's the intuitive aura of rightness for the Collatz conjecture? I would put an 'or' between #1 and #2 (with #1 being less preferred; why would anybody start checking whether P holds for a large number of trial cases without having a hint of #2? How did Collatz come up with this conjecture?), and add a #3: the conjecture isn't easily disproven ('easily' being subjective, but the other two are subjective, too).

While working on some problem or just playing around (with numbers) you could discover something by accident. Noticing the Collatz conjecture for only two numbers is probably enough to cause some curiosity and then you try some more examples and not much later you have a conjecture without any idea why it could be true.

A conjecture is something that a mathematician thinks is probably true, but has not been able to prove nor disprove. There are some widely-known conjectures, but most are just something a mathematician ran into while thinking about some other problem and then ran into a dead end while trying to prove/disprove it.

Often, a conjecture will be invented and then proved or disproved a short time later (I remember my wife disproving one her thesis adviser had proposed in class earlier that day), but occasionally one will remain unproven either way for a long time. With many of the "big" conjectures, this gives people confidence they're true -- they've been tested for millions, billions, or even trillions of cases over the course of decades or centuries, but nobody has found a counterexample. But they also haven't come up with an airtight proof; maybe they'd fail on the ten trillionth try, or maybe they'd hold true forever. That uncertainty is part of what makes them interesting.

The definition of conjecture says they are basically 'guesswork': http://www.thefreedictionary.com/conjecture. Someone will say 'I think there are no positive integers x, y, and z that solve x^n + y^n = z^n for any n greater than 2' but doesn't prove it. This can be due to lack of time, space, or a supreme insight on the part of the conjecturer[1]. The conjecturer has said some important and correct things in the past, so many people will take this experience, look at the conjecture, and say 'this is probably right because this person was right before and we see no errors with it, so we will use it as a theorem', which is to say they will use it without proof because it makes sense.

Then, sometimes hundreds of years later, somebody proves (or disproves) the conjecture and gets some press, and for some very important conjectures, prizes. If the conjecture is disproven, then all the math based on it falls down.

Conjectures are interesting because they are puzzles left by people who couldn't answer them but wanted them to be true, so it is up to future generations, with their advanced minds, tools and insights, to prove them, leading to progress in the field through the techniques developed to turn these conjectures into actual theorems.

[1] http://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem

Not all guesswork is dignified by the term conjecture though.

Part of math school is to learn making and breaking guesses about things mathematical.

As noted elsewhere ITT, a conjecture becomes one when there's a community-acknowledged aura of moral rightness about it. So at the very least, a bunch of people have tried to disprove it and failed.

I guess the confidence comes from having tried it with a bunch of numbers. I'm sure there are people that have written programs that just tried to show the identity holds for all numbers up to n with n very large.

Of course the problem is you can't show this by exhaustion. And you also can't currently prove it, hence why it's a conjecture. But it's interesting because it means you might be missing some integral property that would prove the conjecture.

No; trivially there will be stretches of length >70m where there are no primes, and this is true for any length of "desert" you care to specify (e.g. the 70m consecutive numbers from (70m+1)!+2 to (70m+1)!+(70m+1) are all composite; similarly for the stretch starting at (70m+2)!+2).

What he's proven is that there's never a point after which every prime is more than 70m away from the last one. Even as the average distance between one prime and the next increases towards infinity, you'll always find occasional pairs that are within 70m of each other; they'll get rarer and rarer as the numbers get larger, but you'll never get to the last such pair.

No. The average size of a prime gap around numbers of size N is ln(N).


A rough analogy of this prime gaps work is the following. Imagine that you throw infinitely many darts at a dartboard. Before you throw, draw a bullseye around the center, as small as you want. Then you will miss plenty often, but infinitely many of the darts will hit the bullseye.

Edit: Admittedly only skimmed the article, let alone the paper. Obviously there are an infinite number of primes and we've proved that ages ago. Do not read this comment, read the replies if you want a more accurate answer. Leaving it here for posterity.

Yup, at least according to the article:

"His paper shows that there is some number N smaller than 70 million such that there are infinitely many pairs of primes that differ by N. [...] [N]o matter how sparse the primes become — you will keep finding prime pairs that differ by less than 70 million."

So he proved that there is some number N, not necessarily 70 million but below it, that occurs infinitely often as a gap between consecutive primes.


This is an amazing development. On the other hand, we are also getting close to nailing the odd Goldbach Conjecture. I think it was just last year that Tao proved, without the Riemann Hypothesis, that every odd number N > 1 can be expressed as the sum of at most 5 primes. Wonderful to witness such great leaps in maths during one's own lifetime.

No, this isn't right.

He showed that there are infinitely many pairs of consecutive primes differing by at most 70 million. However, the next such pair could have its lower member more than 70 million numbers further along.

What I mean is, there is allowed to be a gap larger than 70 million with no primes in it, and then, all of a sudden, a close pair. But infinitely many pairs with a gap of less than 70 million are guaranteed.

Basically, take all pairs of adjacent primes (with no primes in between) and separate them into two groups: pairs with a gap less than or equal to 70 million, and pairs with a gap greater than 70 million. The first group has infinitely many members according to the proof. And the second group is probably not empty.
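That two-group split is easy to illustrate on a toy scale. A sketch with my own stand-in threshold of 10 (in place of 70 million), among primes below 10,000:

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

ps = [n for n in range(2, 10000) if is_prime(n)]
gaps = [q - p for p, q in zip(ps, ps[1:])]

THRESHOLD = 10  # toy stand-in for 70 million
small = [g for g in gaps if g <= THRESHOLD]
large = [g for g in gaps if g > THRESHOLD]
# Zhang's result is that (with the real threshold) the analogue of
# `small` never dries up, even though large gaps dominate eventually.
```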

> And the second group is probably not empty.

We know for sure it's non-empty by simple factorial arguments ITT or by appeal to the prime number theorem.

No, that’s not what the paper shows. The paper claims that there exist arbitrarily large prime pairs with a gap of less than 70 million, not that all primes are part of such a pair. Your misinterpretation is equivalent to interpreting the twin prime conjecture to mean that all odd numbers are prime.

Harald Helfgott would have a bone to pick with "getting close". :)


Not bad for the first half of 2013!

One way of telling the story is this: as we keep traversing ever larger numbers, we can always find a pair of primes whose distance is less than 70 million.

The twin primes conjecture holy grail is this: no matter how big we go, we can always find a pair of primes whose distance is the minimal possible, i.e. 2.

So as sparse as these pairs become in the limit according to the prime number theorem, the likelihood of finding them remains always positive.
