Unheralded Mathematician Bridges the Prime Gap
278 points by nature24 1227 days ago | 94 comments

> Without communicating with the field’s experts, Zhang started thinking about the problem. After three years, however, he had made no progress. “I was so tired,” he said.

> To take a break, Zhang visited a friend in Colorado last summer. There, on July 3, during a half-hour lull in his friend’s backyard before leaving for a concert, the solution suddenly came to him. “I immediately realized that it would work,” he said.

http://c2.com/cgi/wiki?FeynmanAlgorithm

    Write down the problem. Think real hard. Write down the solution.
 This sounds glib on the surface but it's really surprisingly deep. The key is to realize that the problem solving doesn't happen in the second step, but rather in the first. Writing down your problem means you have to structure it in some way, to frame it in a way that at least can be represented on paper. Perhaps it's a formalization, or simply a brief explanation. In any case, step 2 is only possible because of step 1 for most interesting problems. Even for those of us who aren't Feynman.
My undergrad advisor had a similar story. He was working on an old open problem for his dissertation, and his advisor (quite a well-known figure in the field) kept giving him ideas. Nothing worked. This went on for four years.

Then one day he decided to stop listening to others and just work on his own. Six months later, he had a novel solution, closing the problem. His entire dissertation (including title, abstract, etc.) was 22 pages (single sided, double-spaced).
Been there, done that. It is not as glorious as it sounds. I had some very strong intuitions about an obscure computer science problem (maybe 10 people worked on it worldwide, but I think they have given up already). I was thinking about this problem for at least a year, with some periods of very intensive thinking. I could have used that time more productively, because it was already quite a busy period for me, but the solution seemed so close.

After about a year, I finally had a solution mapped out which I thought would work. However, there were some major implementation problems (memory complexity). That was when I was finally able to let it go.

So the advice I would give to people is: focus on real-world problems that real people have.
> focus on real-world problems that real people have

Because the world would be a much better place today if we hadn’t bothered about quantum mechanics, special and general relativity, or even the advanced mathematics needed to handle these topics. Sure, this sort of thing won’t solve the problems of this specific time period, but it might well be relevant to a future generation’s problems.
Quantum mechanics solved a bunch of real-world problems that were relevant at the time. As one example, people were trying to predict heat capacities of gases (for things like chemical engineering industrial processes), and the theory they had, based on equipartition, produced results that didn't match experiment at room temperature. And we're not talking a few percent mismatch; we're talking significant differences.

Special relativity likewise addressed specific experimental problems people were having. General relativity you may have more of a case for.
 I disagree. We often don't know what the consequences could be of solving abstract problems by enthusiasts. Whole avenues of science were opened this way.
 The issue is that we celebrate the people who follow Feynman's approach and succeed, but forget all those who (like yourself) end up failing. It's a simple matter of survivorship bias. For each person who "thinks really hard" about a particular problem of this magnitude and finds a solution, there are hundreds, if not thousands, who end up failing.
 If we knew ahead of time which tough problems have elegant solutions, it would save us so much time and effort. Alas, that's not the nature of research.
My point is simply that we shouldn't treat Feynman's algorithm as a magic bullet. You have to recognize that it fails a lot more often than it succeeds.
You should remember that the Feynman algorithm is a joke told by one of his colleagues (Murray Gell-Mann) to mock Feynman's storytelling. It was never intended as a real algorithm to be applied by anybody.

Previously on HN: https://news.ycombinator.com/item?id=1520075
Is it even an algorithm, though? It's such an imprecise rule of thumb that evaluating it as a failure or a success is utterly arbitrary.
So, there are two (slightly off topic) questions this article raises that I've had for a while. Perhaps someone who's been in academia longer than I have can enlighten me.

First, the article talks about experts within the field. I've been a graduate student for about a year now, and within my field of research (molecular dynamics), I still have no idea who the "experts" are. I see the same names frequently pop up on papers I read -- are these the experts? I don't know of any online forums where people discuss MD; if they're out there, they must be a secret. I'm sure I can't just email these professors and say "hey, want to chat?" So where do this group collaboration and the unanimous identification of the leaders in a field come from? Conferences are only a few times a year; I can't imagine that's where all of these people meet up and socialize.

And secondly, how is this mathematician "virtually unknown", as the article puts it? The University of New Hampshire is surely a well-known institution; I'm sure it ranks in the top 100-500 in the world. And there are maybe 20-25 mathematics professors at each university. I also know math is very diverse, and an expert in topology isn't going to know much about number theory. So within each "domain" of math, there can surely only be a few hundred people in the world at any given time actively researching the subject. How can he be so unknown, then?
I am a professional mathematician and number theorist who has studied work closely related to Zhang's.

I had never before heard of an older (40+) mathematician, who has done essentially no meaningful work in the subject before and has virtually no publication record, coming out of seemingly nowhere and proving such a big theorem. This is in no way a slight against older mathematicians; indeed, many spectacular results are proved by people over 40, but they typically accumulate an excellent track record on the way.

Indeed, I fully assumed that this guy was full of shit. However, I have heard that well-known experts in the area have closely read Zhang's paper and found it to be correct.

This is enough to put a smile on my face. There is something wonderful about skepticism and cynicism being proven wrong, especially when the skepticism is my own.
The most dramatic example I know of someone who was older and not well known who found a major result is Kurt Heegner. He was a 59-year-old private tutor and radio engineer with no previously published results when he solved the class number 1 problem. His proof was initially rejected by the mathematical establishment, and he died before learning that it was a major advance and essentially correct.

http://en.wikipedia.org/wiki/Stark%E2%80%93Heegner_theorem

This story and Heegner's are particularly interesting because over the years some prominent mathematicians have stated that they believe mathematical creativity declines rapidly with age.
I'm pretty sure that creativity doesn't so much evaporate with age as you're educated out of it (education is largely focused on memorization of prior work). Or to quote Picasso: "All children are artists. The problem is how to remain an artist once he grows up."
> This is enough to put a smile on my face. There is something wonderful about skepticism and cynicism being proven wrong, especially when the skepticism is my own.

Mathematicians have always amused me with their weird way of turning their craft upon themselves. There's an astonishing amount of work done (I assume by procrastinating PhD candidates) on the statistics of performance in mathematics. Mathematics is supposed to be such a "pure" science, but this work seems to be motivated by insecurity and a base of other negative emotions. The skepticism it breeds isn't at all useful or beneficial for the field; if anything it's destructive, but yet it seems to persist.

Malcolm Gladwell wrote a great piece for The New Yorker [1] on late vs. early bloomers in the art world. Gladwell writes that early bloomers are often driven by a sort of internal energy, and since they haven't taken time to refine their process they tend to be more abstract or conceptual. Late bloomers, he suggests, tend to take years or decades honing their craft. They're extreme perfectionists who, instead of working on building a piece, tend to work on refining the skills they need to build a piece.

In Gladwell's model for genius, Zhang is obviously the latter. I think as a society we'd benefit from celebrating the successes (and even the failures) of "late-bloomers" like Zhang a lot more. Maybe it would promote the kinds of intrinsic motivation which would encourage more brilliant people to continue their struggle.

All of that said, the thing that brings a smile to my face is the fact that Zhang didn't come from one of the MITs, Harvards, Stanfords, or Cornells of the world. He came from a "lowly" state school. Here's hoping he stays there.
> I think as a society we'd benefit from celebrating the successes (and even the failures) of "late-bloomers" like Zhang a lot more.

What do you mean? It seems like Zhang's success is being fully recognized and celebrated, by mathematicians and the public alike. Do you think it deserves still more press than it is getting?

> Mathematicians have always amused me with their weird way of turning their craft upon themselves

> this work seems to be motivated by insecurity and a base of other negative emotions.

On what basis do you make these assertions? How many mathematicians do you personally know?

Mathematicians are overall a cheerful group who are positive, welcoming, and as optimistic as circumstances allow. It's one reason I love working in this subject. Our portrayal in the media and Hollywood is, I think, a bit misleading.

When the editors of the Annals of Mathematics received Zhang's manuscript, from someone in his fifties and entirely unknown, what was their response? To forward the paper to experts and ask for an evaluation. And, indeed, the paper was evaluated on its merits.

> The skepticism it breeds isn't at all useful or beneficial for the field; if anything it's destructive, but yet it seems to persist.

Since I don't understand what you're talking about, could I please ask you to translate your criticism into advice, which I might then consider following?
 Ah, I did not realize he had not really published before. I suppose that is very surprising. Also, the way your post is written seems to suggest that most results out of nowhere are "crankish". Are there really people who spend their time working on these things that aren't professional mathematicians and who submit flawed papers to math journals? I would think that would be an unproductive use of one's time...
Consider that a lot of problems seem "obvious" to beginners who do not know the full literature on the subject, pretty much regardless of the subject.

I studied computer science, and during our data structures and algorithms course I must have "discovered" a dozen or more algorithms that I'd then find treated or discredited a chapter or two later in our course material.

Sometimes someone out of nowhere benefits from not having been told why a specific idea "can't" work, but much more often they end up wasting their time on it. And some of them go on to think they have valid, significant results.

E.g., there are regularly people who are sure they have solutions to problems that are reducible to solving the halting problem, but who don't have the theoretical background to realize it is reducible to the halting problem or why that means their "solution" doesn't work.
> E.g., there are regularly people that are sure they have solutions to problems that are reducible to solving the halting problem, but that don't have the theoretical background to realize it is reducible to the halting problem or why that means their "solution" doesn't work.

I think you might have gotten this the wrong way around: all decidable problems are trivially reducible to the halting problem (just decide them and then map to a halting/non-halting Turing machine). On the other hand, if you can reduce the halting problem to a problem, that means it's undecidable.

I would be curious, though, which "natural" undecidable problems people claim to solve regularly.
You misunderstand what I wrote. I should perhaps have been more precise, as I can see that "solutions to problems" might be misunderstood. "Problems" in this context referred to what people might have tried to prove, not the process they were trying to apply it to. The point being that there are plenty of processes where proving a specific outcome is a problem that can be reformulated as finding a generic algorithm for solving the halting problem.

The most common variations are simply obfuscated cases where it is not clear that a sub-part of the process is Turing complete, and where that sub-part can determine whether or not a specific state is reached.

Quite a lot of software has subsystems that turn out to be Turing complete, whether through the explicit inclusion of scripting support or through more convoluted means.
 I'm not sure, but I think I did understand what you were saying. I was genuinely interested in undecidable problems people might run into and that are easily seen to be at least as powerful as the halting problem. Thanks for the example!
> Are there really people who spend their time working on these things that aren't professional mathematicians and who submit flawed papers to math journals?

Yes, as well as people who are professional mathematicians (usually not well known) who also do so.

There is not a flood of such papers, but my thesis advisor, in his capacity as editor of the Proceedings of the American Mathematical Society, gets at least a dozen or so such papers each year. I refereed some of them, and had to explain to these authors why their proofs were mistaken.
Attempted proofs of Fermat's Last Theorem by cranks used to be so common that Edmund Landau, a German mathematician, had a form letter for them: “Dear Sir/Madam: Your proof of Fermat’s Last Theorem has been received. The first mistake is on page _____, line _____.”

I have sometimes seen notes on the pages of prominent mathematicians saying that they don't have time to examine unsolicited attempts at major problems by amateurs, so there must still be a good number. I have also known someone whose sibling is a software engineer and whose hobby is trying to resolve P vs. NP.
I heard a talk by one of the main vetters of the P vs. NP problem (the talk itself was underwhelming; clearly presentations were not his strong suit), and he said there were vast numbers of attempted proofs, only a small fraction of which are even treated seriously, and none of which are considered for long enough to be considered "close" to complete. He seemed to believe that the only significant hope for the problem lay in the hands of Ketan Mulmuley at UChicago (http://www.cs.uchicago.edu/people/mulmuley), but that realistically a solution would not be found in our lifetimes.

This is not only interesting for that specific problem; it was also revealing how many amateurs (including those with doctorates) are completely in over their heads at the edge of our knowledge. The people who understand it well enough to evaluate an approach adequately don't submit, because they understand the difficulty in ways most don't.
 Back in the 19th century, non-Euclidean geometry was considered the work of the devil and many amateurs and cranks attempted to "prove" the parallel postulate (which had already been proven to be independent of the other postulates).
> I'm sure I can't just email these professors and say "hey, want to chat?"

You can do this, especially if you have an intelligent question or an interesting insight. I have emailed top people in various fields with specific questions and gotten answers. They usually won't want to give a private tutorial to someone with no knowledge of the field, but if there is some ambiguity or possible error in a paper of theirs they pretty much always will respond. I have also on a number of occasions made cold calls on luminaries in fields when I am in their city, and when going to conferences, the opportunity to meet and greet is one of the main purposes.

I also receive cold calls, visits, and emails from people interested in the couple of fields where I am known, and I always respond positively to them. My email responses get more spaced out if there's too much tutorial, but brief yes-or-no questions or interesting comments are pretty much always well received. I don't really like getting phone calls, but I have had some pretty interesting conversations with people who managed to find my phone number; most of these are from people overseas.
> if there is some ambiguity or possible error in a paper of theirs they pretty much always will respond

I just want to chime in and also mention that while proper scientists should always enjoy an opportunity for improvement, any proper human always enjoys some acknowledgement of success. I've implemented some cool algorithms I've seen in papers and emailed the authors to 1) thank them for their research, 2) show them my implementation, and 3) ask for their feedback. Their email addresses are generally at the top of the papers, so I figured "what the heck", and the response I've gotten the few times I've done this has been amazing.

I'll further extend this to say that you should never be afraid to talk to somebody because of your perception of their status. You'd be amazed at how accessible top-tier folks are. Often a key factor in their greatness is their ability to collaborate! But please, be extremely respectful of their time: keep your messages brief, to the point, and pleasant. Don't fawn or apologize. Don't ask to ask. Don't justify or elaborate. Treat them as a colleague and they will generally treat you as such.
> I'll further extend this to say that you should never be afraid to talk to somebody because of your perception of their status. ...
> Keep your messages brief, to the point, and pleasant. Don't fawn or apologize. Don't ask to ask. Don't justify or elaborate. Treat them as a colleague and they will generally treat you as such.

While at school in Norway, we did a project that involved creating a "paper". I chose to do some interviews, and announced I was going to interview some members of parliament, and the CEO of a national TV channel who also happened to be a TV celebrity. Nobody in my class thought I'd get to talk to them because of the "status gap" -- as much as a Norwegian MP or CEO is not exactly the top of the international totem pole, to a 14-year-old student the gap was still substantial. It took a little persistence to get past the secretaries / PAs, but that was it.

You don't get if you don't ask, and it's not like anything bad will happen. What I learned was that a lot of these people get contacted with genuine, well-targeted requests far less often than most people might think. Rather, the fawning "fanboy" type requests are what get rejected by far the most often, or those that ask for too much commitment / time.
 > I'll further extend this to say that you should never be afraid to talk to somebody because of your perception of their status.I'm quoting this for truth, because it didn't feel emphasized enough. Status is never, never, never a don't-talk-to-me forcefield. Even the goddamn POTUS reads letters from random little people. You're allowed to talk to whoever you please, and they're allowed to say no or decline to answer.
> I'm sure I can't just email these professors and say "hey, want to chat?"

Why not? I am a barista with a high school diploma and no interest in higher education, but I have emailed linguists and archaeologists to ask questions and discuss my independent research. In most cases I have heard back within 24 hours; in one, where the gentleman I reached out to had a 32-page CV, had held multiple chairs, and ran two institutions, I heard back from him within the hour. In each case I was received with enthusiasm and treated as a peer who just needed some questions answered.

It was a revelation for me, and I'm just some guy. I suggest that if you are active in both your field of study and academia, you should absolutely reach out to the experts in your field.

As Regina Spektor says, "People are just people, they shouldn't make you nervous."
I have been sitting for 20 minutes now, trying to come up with an answer to this. Simply, I don't know. For me, I hope that being able to put "Ronin" down in the "institute" field on CFPs and research grants allows my work to be taken seriously. It's hard to corral a movement out of "independents", but Jon Wilkins and the folks at Ronin are devoting themselves to it.

I believe it's people like Jon who are giving me the chance to someday be remembered as someone who contributed to the sum total of human knowledge, rather than just some guy who knew a lot about locks.
Ugh, that's awful. I have on occasion been made aware of how atypical my educational experiences have been. My secondary education was filled with teachers who made it very clear that if I wanted to know something, it was always safe to ask, and that I would be rewarded for my intellectual curiosity.

I've only had one memorable, explicitly negative experience, where I was openly mocked for asking about something outside of my experience, but my indignation far exceeded my embarrassment in that incident.

I keep a text file on my laptop named "youcantalktoanyone.txt" with the contact info for a few people whose work I'm interested in. One by one, as I have something to contribute or need clarification on their work, I get in touch. I forget sometimes how lucky I've been.
> YMMV and some academics have no interest in spreading knowledge at all. Fuck them.

This is true in many other areas of life. I found there are plenty of old farts in amateur radio and other hobbies; you just have to find the ones that are interested in mentoring and ignore the assholes that shit on everyone.
Speaking to your first question: "molecular dynamics" is too broad. Can you name the dominant players working on the specific molecules / processes where you publish?

For example, I have my PhD in EE / Robotics. I am an expert in (long-range) UHF RFID for mobile robots. There are probably only 5 other dominant researchers in this space. There are a lot of similar, related fields (e.g. UHF sensors, power harvesting, indoor channel modeling, radar sensing), and I've co-published in some of these too, but...

Ultimately, you'll become an expert in a very (very!) narrow topic. You'll probably know all of the dominant players by first name. You may go for beers with them at conferences (as we do). I recommend checking out Matt Might's "Illustrated Guide to a PhD": http://matt.might.net/articles/phd-school-in-pictures/
I work in an area tangential to MD; I would say that a list of "experts" would include DE Shaw, Benoit Roux, Vijay Pande, and David Baker (and associated people in their labs).

You say:

> I'm sure I can't just email these professors and say "hey, want to chat?"

I am not yet a graduate student and I do exactly that relatively frequently. It has very often led to great talks about science. You should be informed and knowledgeable about the work you want to discuss (and be specific with your queries), and I think you'll find that most academics are more than happy to talk about their own work. You can learn a lot about a subfield this way.
> I still have no idea who the "experts" are. I see the same names frequently pop up on papers I read -- are these the experts?

Yes, they are the people who are saying stuff that everyone else is quoting. In the context of your question, they are "the experts." All it takes to join their ranks is to get your citation rate up :-).
And just to make sure one is not at some local optimum: it's not so much the names on the front page. Read papers the way experts do: from the biblio!
> And there's maybe 20-25 mathematics professors at each university.

Yitang Zhang is not a professor, he is a lecturer [1][2]. A quick search on Google Scholar reveals that people with that name have published extremely few papers in mathematics. He doesn't seem to have a website of his own, especially one that lists his publications. I'd find it quite believable that he is relatively unknown in his field.
Yitang's PhD is in a completely different field of math: algebraic geometry. Number theory is his hobby.

This is a very inspiring story!
"Completely different" is a bit of an exaggeration. Some number theorists are really algebraic geometers.

But yes, there are people who switch away from their thesis areas. Barry Mazur, if the story is correct, wrote an awesomely short thesis on topology. But he's famous as a number theorist.
 The old saw is that when you finish your PhD you will either be sick to death of the subject and do something quite different, or still interested and spend most of your career on the same subject. It does seem to be quite binary in practice.
> I see the same names frequently pop up on papers I read -- are these the experts?

Not always, but often, yes: the names you see frequently are people leading the field, or people who have been around much longer and have acquired more background knowledge than most. When you go to conferences, it's obvious who the leaders of the field are -- sometimes they're the ones generally speaking the most sense, or the ones sitting quietly at the back and chiming in with insights from time to time.

> So where does this group collaboration and unanimous identification of who the leaders in a field are come from?

Conferences. Yes, that is where people meet, socialise, network, and then subsequently collaborate. Usually there are a few large conferences in a field, and then smaller groups breaking off from that into workshops, networking meets, etc. When you make contact with someone at a conference, stay in touch, visit them to give a talk or host them, and relationships build from there. It takes time.
 @ your second question, the article notes that "Without communicating with the field’s experts, Zhang started thinking about the problem", so maybe he just worked on it in his free time? Not that it means he hasn't published much, but there aren't many Yitang Zhang papers on arXiv either. As far as I know he has no website, and according to UNH he is a "lecturer" rather than a professor. I'm not sure how the academic ranks work in the USA but I believe that, typically, lecturers are of lower "rank" than professors. So in that case, Zhang would have been effectively less renowned, hence "unknown", but obviously no less able.
 A lecturer is the bottom of the heap. It's the lowest title you can have and teach in a university. Tenured professor > tenure track professor > visiting assistant professor > adjunct / lecturer. It means he's paid per course and isn't guaranteed any classes beyond the current semester.
 Is there any widespread stigma surrounding the lower-ranks (academic/intellectual ability, etc), or is the system just widely acknowledged as simply some sort of salary structure?
 Both. If you're any good, you're supposed to magically move up the escalator. It's a career thing like any other.
 Lecturers sometimes don't have PhD's, tend not to do research, and get stuck with all the courses that the tenured profs don't want to teach (and don't get paid much better than their TA's to teach the course).
Talking to other people in the field is a great way to find influential work. You can only read so many papers, but by talking to many other people, you expand your accessible information. Other people can mention that this paper is great, or that they've seen a series of great papers from a particular lab, and point out connections that you might miss. It's not just the frequency of names that matters, though; it's the quality of the work. Are they doing revolutionary or deeply insightful work, or are they doing lots of good work that isn't _great_ work? (Yes, the comparison is a little vague -- leaving that open to interpretation intentionally.)
 If you don't publish or at the very least attend conferences in academia I'd say you would be labeled unknown.
> I'm sure I can't just email these professors and say "hey, want to chat?"

I've done just that on a number of occasions, and more often than not the answer is "yes, sure" -- as long as the chat is interesting enough.

If you start off without asking, you at least save yourself the rejection. But you will also save yourself the acceptance.
 Using your numbers, he'd be one out of thousands or tens of thousands of math professors at well known universities. But it's worse because he has never published before and he's only a lecturer.Needle in a haystack.
This is just utterly inspiring and something I've always dreamed about happening to me. Too often, people are proud of themselves after figuring out something that took a few days or even a few weeks to come up with. To stumble upon a solution when you had all but given up hope after three YEARS is just astounding, and must feel amazing.

Congratulations to this mathematician. Inspiring. Truly inspiring.
> three YEARS

Persistence, man, persistence.

The Chinese have a saying that, loosely translated, goes: "ten years is short to avenge a father's death."
君子报仇十年不晚 (roughly: "for a gentleman to take revenge, ten years is not too late")

Also this: ten minutes on stage is ten years of practice behind the stage.
Neato, thanks!

http://en.wiktionary.org/wiki/%E5%90%9B%E5%AD%90%E6%8A%A5%E4...

"A nobleman bides his time for the perfect moment for revenge."

I swear I didn't make up the oedipalia! A Chinese native mentioned it in passing years ago and the phrase stuck.
This is simply amazing. Stories like this are humbling, inspiring, and provide a renewed appreciation for persistence.

Also, with respect to being unknown, the one thing that does pop up upon doing a search is his rating on RateMyProfessors: http://www.ratemyprofessors.com/ShowRatings.jsp?tid=56169
The common refrains seem to be 1) easy tests, 2) seems nice in class, and 3) is mean one-on-one.

I wonder if 2 and 3 have anything to do with his shyness (as mentioned in the article) and his lack thereof when he gets talking about math? Maybe he gets on a roll as far as math goes, but one-on-one interactions are more difficult?

Or maybe he was busy coming up with this bad-ass paper and didn't want to extend his office hours. Either/or.
 The Twin Prime conjecture is probably the most elementary unsolved mystery in math.
 Is the bound achieved by his proof actually 70000000? That looks like a suspiciously round number. Why can't they tell us the exact bound that he achieved? (Presumably some slightly smaller number?)
 In math you often don't need super precise bounds. I think it just happens that 70m works. If you try harder you can probably find something more accurate. IMO even a bound like 100m is good because it's a finite number.
 No, actually any proof that supplies a bound must also have some least bound that can be derived from that proof. It could be a calculation that starts from the bound, or it could be some calculation with a result that tells you what the bound is. To me it seems very unlikely that the actual bound was exactly 70000000 in either case.It _could_ be that some super-expensive calculation is required for each candidate bound, and Zhang guessed 70000000 as a starting point and did the calculation for that number, which succeeded, but he didn't bother repeating it for any smaller numbers because it would cost him too much. But if that was the case, that would be interesting in itself.
 Also reported six days ago, although with comparatively little discussion:
 > Among large numbers, the expected gap between prime numbers is approximately 2.3 times the number of digits; so, for example, among 100-digit numbers, the expected gap between primes is about 230.> no matter how sparse the primes become — you will keep finding prime pairs that differ by less than 70 million.Does this mean even for numbers longer than 70m/2.3 = 31m digits, there is bound to be at least one prime every 70m numbers or so?
 No, it doesn't mean there is a prime EVERY 70m or so. It means that as you go on examining pairs of consecutive primes, you will keep finding more and more where the gap is 70m or less. You will find many more where the gap is more than 70m. The average gap keeps increasing: the average gap near N is approximately log(N). (That's the natural logarithm, not the common logarithm.)

That's where the 2.3 you quoted comes from. Numbers with d decimal digits are around 10^d, so the average gap near them is approximately log(10^d) = d log(10), and log(10) is approximately 2.3. Hence the average gap for d-digit numbers is approximately 2.3 x d.
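A quick empirical sanity check of that estimate (just a sketch; the sieve bound of one million is arbitrary):

```python
from math import log

# Sieve of Eratosthenes up to one million.
N = 10**6
sieve = bytearray([1]) * N
sieve[0] = sieve[1] = 0
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = bytearray(len(range(i*i, N, i)))
primes = [i for i in range(N) if sieve[i]]

# Average gap among 6-digit primes vs. the 2.3 * digits estimate.
six_digit = [p for p in primes if p >= 10**5]
avg_gap = (six_digit[-1] - six_digit[0]) / (len(six_digit) - 1)
print(avg_gap)   # around 13
print(2.3 * 6)   # 13.8
```

The measured average for 6-digit primes sits between log(10^5) ≈ 11.5 and log(10^6) ≈ 13.8, matching the "2.3 times the number of digits" rule of thumb.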
 Side note: Most people represent the natural logarithm in ascii with ln(). Makes it easier to read for people expecting log(10) to be 1...
 By most people you mean computer people. In math "natural log" is generally just written log().
 No, there can be huge gaps with no primes. For any n, the numbers n!+2 through n!+n are all composite (n!+k is divisible by k), so you get a prime-free stretch of length n-1, as long as you like. What this proof shows is that you never reach a point where the gap is always bigger than 70 million forevermore. This is a big step towards the "twin prime conjecture", which claims there are infinitely many pairs of primes only 2 apart.
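The factorial trick is easy to verify directly, since n!+k inherits the factor k (a small sketch):

```python
from math import factorial

# n!+k is divisible by k for 2 <= k <= n, so n!+2 .. n!+n is a run
# of n-1 consecutive composite numbers -- a "prime desert".
def desert_is_composite(n):
    f = factorial(n)
    return all((f + k) % k == 0 for k in range(2, n + 1))

print(desert_is_composite(10))   # True
print(desert_is_composite(100))  # True
```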
 Of particular significance: since the gap between consecutive primes never becomes permanently bigger than 70 million, and there are infinitely many such gaps, something resembling the pigeonhole principle shows that there are infinitely many prime pairs for at least one specific gap size between 2 and 70 million. That is, there might not necessarily be infinitely many pairs of primes exactly 2 apart, but there are infinitely many pairs of primes exactly N apart for some N < 70,000,000.
 The "something resembling the pigeonhole principle" is the fact that if the union of finitely many sets is infinite, then at least one of the sets must be infinite as well.I don't know whether it has a name.
 I think it's usually just called the pigeonhole principle. It's a special case of the Infinite Ramsey Theorem.
 Perhaps. But it's simpler than all that. Assume each of the 70 million disjoint sets has finite cardinality. Then the cardinality of the union is the sum of the cardinalities, and a finite sum of finite numbers is finite. Contradiction; QED.
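As a small empirical illustration of gaps recurring (which of course proves nothing about infinitude), one can tally the gaps between consecutive primes below an arbitrary bound:

```python
from collections import Counter

# Sieve primes below one million and count the gap sizes between neighbors.
N = 10**6
sieve = bytearray([1]) * N
sieve[0] = sieve[1] = 0
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = bytearray(len(range(i*i, N, i)))
primes = [i for i in range(N) if sieve[i]]

gaps = Counter(b - a for a, b in zip(primes, primes[1:]))
print(gaps[2])            # twin-prime pairs below 10**6
print(gaps.most_common(3))
```

Small gaps keep showing up in abundance at this scale; the theorem says some gap size below 70 million keeps showing up forever.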
 I don't get conjectures. Does it mean they are widely believed to be true, just unproven? If so, why, where does this confidence come from? If not, what makes them interesting in the first place?
 The confidence comes from two things, really: 1. The conjecture has been found to hold for a large number of trial cases, and 2. The conjecture has an intuitive aura of rightness. A good example is the 3n + 1 conjecture: http://en.wikipedia.org/wiki/Collatz_conjecture

Anyway, point two is obviously extremely subjective, but I think it's the crux of why long-standing conjectures are interesting. Why should something feel correct without necessarily being so? If it does turn out to be correct, what about its nature put people on its scent trail? Mathematics so often seems to be a kind of hermetic universe unto itself, so part of the beauty of conjectures is that they're a kind of contact point between that universe and human cognition.
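For the curious, the 3n + 1 iteration is tiny to write down, which is part of its charm (and checking it up to some bound proves nothing, of course):

```python
def collatz_steps(n):
    """Iterate n -> n/2 (if even) or 3n+1 (if odd); count steps to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every starting value up to 10**4 does reach 1, consistent with the
# conjecture (which asserts this for ALL positive integers).
assert all(collatz_steps(n) >= 0 for n in range(1, 10**4))
print(collatz_steps(27))  # 111
```

Innocent-looking starting values like 27 wander surprisingly far before collapsing, which is part of why the "aura of rightness" is so hard to turn into a proof.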
 Where's the intuitive aura of rightness for the Collatz conjecture? I would put an 'or' between #1 and #2 (with #1 being less preferred; why would anybody start checking whether P holds for a large number of trial cases without having a hint of #2? How did Collatz come up with this conjecture?), and add a #3: the conjecture isn't easily disproven ('easily' being subjective, but the other two are subjective, too).
 While working on some problem or just playing around (with numbers) you could discover something by accident. Noticing the Collatz conjecture for only two numbers is probably enough to cause some curiosity and then you try some more examples and not much later you have a conjecture without any idea why it could be true.
 A conjecture is something that a mathematician thinks is probably true, but has not been able to prove nor disprove. There are some widely-known conjectures, but most are just something a mathematician ran into while thinking about some other problem and then ran into a dead end while trying to prove/disprove it.Often, a conjecture will be invented and then proved or disproved a short time later (I remember my wife disproving one her thesis adviser had proposed in class earlier that day), but occasionally one will remain unproven either way for a long time. With many of the "big" conjectures, this gives people confidence they're true -- they've been tested for millions, billions, or even trillions of cases over the course of decades or centuries, but nobody has found a counterexample. But they also haven't come up with an airtight proof; maybe they'd fail on the ten trillionth try, or maybe they'd hold true forever. That uncertainty is part of what makes them interesting.
 The definition of conjecture says they are basically 'guesswork': http://www.thefreedictionary.com/conjecture. Someone will say 'I think there are no values of x, y, and z that solve x^n + y^n = z^n for n greater than 2' but doesn't prove it. This can be due to lack of time, space, or a supreme insight on the part of the conjecturer. The conjecturer has said some important and correct things in the past, so many people will take this experience, look at the conjecture, and say 'this is probably right, because this person was right before and we see no errors in it, so we will use it as if it were a theorem', which is to say they will build on it without proof because it makes sense. Then, sometimes hundreds of years later, somebody proves (or disproves) the conjecture and gets some press, and for some very important conjectures, prizes. If the conjecture is disproven, then all the math based on it falls down. Conjectures are interesting because they are puzzles left by people who couldn't answer them but wanted them to be true; it is up to future generations, with their advanced minds, tools, and insights, to prove them, leading to progress in the field through the techniques developed to turn these conjectures into actual theorems.
 Not all guesswork is dignified by the term conjecture though.Part of math school is to learn making and breaking guesses about things mathematical.As noted elsewhere ITT, a conjecture becomes one when there's a community-acknowledged aura of moral rightness about it. So at the very least, a bunch of people have tried to disprove it and failed.
 I guess the confidence comes from having tried it with a bunch of numbers. I'm sure there are people who have written programs that just try to show the conjecture holds for all numbers up to some very large n. Of course you can't settle it by exhaustion, and you also can't currently prove it, hence why it's a conjecture. But it's interesting because it means we might be missing some fundamental property that would prove the conjecture.
 No; trivially there will be stretches of length >70m where there are no primes, and this is true for any length of "desert" you care to specify (e.g. the 70m numbers starting at (70m+1)!+2 are all composite; so are the ones starting at (70m+2)!+2). What he's proven is that there's never a point after which every prime is more than 70m away from the last one. Even as the average distance between one prime and the next increases towards infinity, you'll always find occasional pairs that are within 70m of each other; they'll get rarer and rarer as the numbers get larger, but you'll never reach the last such pair.
 No. The average size of a prime gap around numbers of size N is ln(N).http://en.wikipedia.org/wiki/Prime_number_theoremA rough analogy of this prime gaps work is the following. Imagine that you throw infinitely many darts at a dartboard. Before you throw, draw a bullseye around the center, as small as you want. Then you will miss plenty often, but infinitely many of the darts will hit the bullseye.
 Edit: Admittedly I only skimmed the article, let alone the paper. Obviously there are infinitely many primes, and that was proved ages ago. Do not read this comment; read the replies if you want a more accurate answer. Leaving it here for posterity.

Yup, at least according to the article: "His paper shows that there is some number N smaller than 70 million such that there are infinitely many pairs of primes that differ by N. [...] [N]o matter how sparse the primes become — you will keep finding prime pairs that differ by less than 70 million." So he proved that some number N, not necessarily 70 million but below it, occurs as a gap between primes infinitely often.

--------

This is an amazing development. On a related note, we are also getting close to nailing the odd Goldbach conjecture. I think it was just last year that Tao proved, without the Riemann hypothesis, that every odd number N > 1 can be expressed as the sum of at most 5 primes. Wonderful to witness such great leaps in maths during one's own lifetime.
 No, this isn't right. He showed that there are infinitely many prime pairs differing by less than 70 million. However, the next such pair could have its lower prime more than 70 million further along. What I mean is, there can be a gap larger than 70 million with no primes in it, and then, all of a sudden, a close pair. But infinitely many pairs with a gap of less than 70 million are guaranteed. Basically, if you take all adjacent pairs of primes (with no primes in between), you can separate them into two groups: those with a gap of at most 70 million, and those with a larger gap. The first group has infinitely many members according to the proof. And the second group is probably not empty.
 > And the second group is probably not empty.

We know for sure it's non-empty by the simple factorial arguments ITT, or by appeal to the prime number theorem.
 No, that's not what the paper shows. The paper claims that there exist arbitrarily large prime pairs with a gap of less than 70 million, not that all primes are part of such a pair. Your misinterpretation is equivalent to interpreting the twin prime conjecture to mean that all odd numbers are prime.
 Harald Helfgott would have a bone to pick with "getting close". :)
 Not bad for the first half of 2013!
 One way of telling the story is this: as we keep traversing ever larger numbers, we can always find a pair of primes whose distance is less than 70 million.The twin primes conjecture holy grail is this: no matter how big we go, we can always find a pair of primes whose distance is the minimal possible, i.e. 2.So as sparse as these pairs become in the limit according to the prime number theorem, the likelihood of finding them remains always positive.
 Wondering if this impacts the security of public key encryption.
 As part of properly generating RSA keys there are restrictions on how close together the two prime numbers chosen can be. So no, twin primes bear no relevance to the problem.
