Mathematicians clear hurdle in quest to decode primes (quantamagazine.org)

Two inaccuracies in the article. For purposes of simplicity, the author writes "But the Lindelöf hypothesis says that as the inputs get larger, the size of the output is actually always bounded at 1% as many digits as the input." But this is a case where simplification goes too far. What Lindelof says is that the size of the output is always bounded by ε% as many digits as the input, for ANY (arbitrarily minuscule) ε > 0.

Second, the subtitle "Paul Nelson has solved the subconvexity problem..." is strange. The subconvexity problem, for a given L-function, is to lower the percentage described above from 25% to "any positive number"; in other words, to bridge the gap between the convexity bound (which is "trivial") and Lindelof (a consequence of GRH). The only way that the statement "Paul Nelson has solved THE subconvexity problem..." could maybe be accurate is if Nelson proved the Lindelof hypothesis for "all" L-functions. Which is far from the case. (What makes subconvexity so interesting, as Nelson says in the article, is that it is a problem where you can make partial progress towards a particularly important consequence of GRH. And Nelson's result is exactly that: partial progress.) More accurate would be "Paul Nelson has made significant progress on the general subconvexity problem."


These slips from Quanta are puzzling. Quanta normally seems to be more meticulous about accuracy, and the author is a veteran.

The attraction of Quanta has always been that its writers seemed to have an intrinsic passion for math, treating it as more than just a gee-whiz topic to gloss over in between the latest panicked missives about politics and how cellphones are destroying our children. One of its most delightful qualities has been the consistent willingness to take a few extra sentences to explain even advanced concepts in a manner that is basically technically correct.

Certainly the article remains far better than the average mass media STEM writing, but Quanta should take greater care to keep their quality pristine. They have been utterly unique and have set the example for everyone else.


It's not entirely fair to call the second point a major slip. I think that it is still sort of inaccurate, but not exactly for the reason I had said above; see the thread following kevinventullo's comment below. In any case, my second point is sort of pedantic. I don't love the terminology used, but I wouldn't heap very much blame on the author.

The first one is a bit worse. I think if I had read this knowing nothing about convexity I would have gotten the wrong idea from the arbitrary choice of 1%. I understand the desire to simplify, but it is an art to simplify while keeping what you say technically correct. Quanta usually does an excellent job of this. I wouldn't say that the first point above is an egregious error by any means, but I think it is a slip.


Ian Petrow defines the Subconvexity Problem as "Prove non-trivial upper bounds for L-functions on the critical line" [1]

Given that, it seems fair-ish to say that Nelson solved the Subconvexity Problem. You just have to understand that the problem is really a family of problems of increasing hardness (e.g. prove tighter and tighter bounds), and solutions more powerful than Nelson's may come later.

[1] https://www.ucl.ac.uk/~ucahpet/SubconvexityIntro.pdf


By that definition, I agree, if you add the qualifier "Nelson solved the subconvexity problem for automorphic L-functions."

I still think it's vague to talk about "the" subconvexity problem without specifying what variable you want the bound to be subconvex in, but, really, who am I to argue with Ian Petrow..!


Over the past few years, Quanta has become my go-to place for learning about current science developments. A decade or more ago, Scientific American started focusing on its political agenda. And in the last year or two, American Scientist has been completely taken over by wokeness. I haven't found a better venue for pleasure-type learning about science than Quanta.

My thoughts exactly.

To be fair, I think there are economic conditions that drive the unfortunate situation. Quanta is very lucky to have enormous free support from a billionaire-funded foundation (Simons Foundation).

When publications become desperate for cash, they dive into the politics and culture war rathole.


I’m confused by your second paragraph. The article describes the subconvexity problem for a given L-function as lowering the bound from 25% to x% for some x < 25. That is, 25% is the “convexity bound” and the existence of such an x is a “subconvexity bound.”

My understanding is that Nelson has indeed done this for every L-function, although the exact x varies with the L-function. How is that not solving the subconvexity problem?


A subconvex bound is any bound that beats the convexity bound of 25%. If you would define "the subconvexity problem" as beating the 25% bound AT ALL (or in other words providing any subconvex bound), then the statement in the subtitle would be closer to accurate.

But this is not (I think) how most people would speak, and I wouldn't say this. I would say the subconvexity problem is the problem of bridging the gap between 25% (convexity) and 0% (Lindelof). Subconvexity bounds for various L-functions (zeta, Dirichlet L-functions, and higher rank L-functions) are a very active area of research, and so it seems strange to me to say "the subconvexity problem is solved."

"My understanding is that Nelson has indeed done this for every L-function." One minor note. Nelson's work applies if the L-function is automorphic. By the Langlands philosophy, this should be true for any reasonable L-function, but this is far from known, even in, e.g., the simple case of the L-function corresponding to the tensor product of two automorphic L-functions.

Edit: I am wrong about the statement "this is not (I think) how most people would speak." It looks like beating the convexity bound at all is often described as "solving the subconvexity problem," e.g. in the introduction here https://link.springer.com/article/10.1007/s002220200223. This description is a bit strange to me, but if it is standard then it is unfair for me to call it an "inaccuracy," persay; thanks for pointing this out.


Friendly note: per se is a Latin phrase with non-obvious spelling.

As a further addendum, there are different "aspects" in which you could want the bound to be subconvex. As a result, even for a given family of L-functions, there is more than one "subconvexity problem." In this case, Nelson's bound is not subconvex in the non-Archimedean aspect. This is another reason why it doesn't seem quite right to me to say without qualification that "the subconvexity problem has been solved."

As you say, we now know that each standard L-function satisfies a subconvex estimate as its argument varies. This falls short of "solving the subconvexity problem" in two respects.

The first, pointed out already by gavagai691, is that it is not known that "every L-function" is "standard" (this is a major open problem in the Langlands program).

The second is that the general "subconvexity problem" asks for a nontrivial estimate not only for each fixed L-function as its argument varies, but more generally as the L-function itself varies. The general case of the problem remains open.


> What Lindelof says is that the size of the output is always bounded by ε% as many digits as the input, for ANY (arbitrarily minuscule) ε > 0.

Is there a reason or benefit to stating it like that rather than saying the fraction of output and input digit size asymptotically approaches zero? Or did I misunderstand the explanation?


Thinking about digit size is sort of fine for simplification / not turning off general audiences with equations, but a bit of a pain to think about if, e.g., the things you're comparing might be smaller than one.

What Lindelof says more precisely is that for any ε > 0 and any L-function L(s), there is a constant C_(ε,L) (depending only on ε and L, but crucially not on s) such that |L(s)| <= C_(ε,L) (1+|Im(s)|)^ε for s on the critical line Re(s) = 1/2.
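For a concrete feel, here is a quick numerical sketch (it assumes the mpmath library) comparing the size of zeta on the critical line with t^(1/4). For zeta in the t-aspect, the convexity exponent 1/4 is the "25%" of the article's digit-counting language, while Lindelof predicts eventual growth slower than t^ε for every ε > 0:

    from mpmath import mp, mpc, zeta

    mp.dps = 15
    # |zeta(1/2 + it)| fluctuates; compare it with the convexity-bound
    # benchmark t^(1/4) as t grows.
    for t in (10, 100, 1000, 10000):
        print(t, float(abs(zeta(mpc(0.5, t)))), t ** 0.25)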


There may be some linguistic benefits to the "epsilon" formulation when discussing subconvexity rather than Lindelof. For instance, I think "the output is eventually bounded by 24% of the input" sounds more natural than "the limsup of output/input approaches something less than 24%".

I remember as a teen when my parents got me a PC (which they could ill afford) as an upgrade from the by-then-ten-year-old hand-me-down ZX Spectrum.

I was in awe at how many primes I could find on my Cyrix 333MHz with my own noddy code.

Left it running all day when at school.

Never found anything of value, but I learned how to Beowulf cluster and learned a lot about arbitrary-precision integer implementation.

Simpler times


When I was a sophomore in high school, in 1972, our school started a "computer class" which consisted of a teletype in a corner of a former storage room. My best friend and I were the only ones to sign up for it. The teacher was the math teacher; he tossed us a manual from the local community college and basically turned us loose on it. We logged into their Univac. The first thing we did was find the square root of a number using the Newtonian approximation. The second thing we did was find the first 100 primes, which took forever :)
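For the curious, the "Newtonian approximation" for square roots is short enough to reconstruct; a minimal sketch in Python rather than the Univac's BASIC:

    # Newton's (Heron's) method for sqrt(a), a > 0: repeatedly average
    # x and a/x until x*x is close enough to a.
    def newton_sqrt(a, tol=1e-12):
        x = a if a > 1 else 1.0  # any positive initial guess works
        while abs(x * x - a) > tol * a:
            x = (x + a / x) / 2
        return x

    print(newton_sqrt(2.0))  # 1.4142135623...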

Awesome your parents got you a PC - we would have died for one if it had existed then!


For me it was learning Fortran IV (after Focal, then Basic, on a PDP-8i) because my dad let me use his account on the uni CDC 6400 - I basically built a basic bigint math library that I then put to use to determine if 1000! + 1 was prime :-)

And? Was it prime?

From memory, "I don't recall" - it was 50 years ago :-) Though Wolfram Alpha tells me "nope, it ain't".

Though I do have some recollection of hitting account limits on compute time on the CDC machine. My dad never mentioned any overages though. He was working in CDC assembler building a simulator for something with the odd name (at the time) of a "microcontroller". It was a prof's project that needed an emulator, so that was his term project.


Two minutes with libgmp (most of that time trying to remember the function names) tells me that 1000!+1 is divisible by 6563
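For anyone wanting to replicate that without libgmp, here is a sketch using Python's built-in bignums. Since every prime p <= 1000 divides 1000!, we have 1000! + 1 ≡ 1 (mod p) for all such p, so the smallest prime factor must exceed 1000:

    from math import factorial

    n = factorial(1000) + 1  # a 2568-digit number
    d = 1001                 # n is odd, so try odd candidates only
    while n % d:
        d += 2
    print(d)  # the first divisor found is n's smallest prime factor;
              # per the comment above, 6563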

Hah! Reminds me of myself as a youth, but I was trying to find primes with an utterly horrible Perl script. I wish I still had my early code from my salad days.

Is finding primes of all things really the first thing you did with a computer as a teen? You could have invented Facebook or Napster

Finding primes is a hugely important problem. I’d argue much more important for the world than sharing what you ate for breakfast or music.

I invented Napster but then Justin Timberlake stole my idea.

I know.

I'm fairly sure I did develop them but just deleted them, as I thought nobody would want that.


In fact, nobody does want Facebook, so you were right

I actually built a site in the '90s that was kind of like Reddit but just for BBC news articles (they only added comments a few years ago), and a comedy/satire news site à la The Onion.

In retrospect, teenage me should probably have told other people about my websites.


The first thing I'd do today is a spam filter :-)

Young enough for a Fields? There is a ceremony later this year, but it's probably too soon. So he could be awarded a Fields in 2026 if he's not yet 40. He received his PhD in 2011.

Also, according to MathGenealogy, his grand-grand-grand-grand-grand-grand-grand-grand thesis advisor was none other than Poisson himself. Poisson's advisors were Lagrange and Laplace. Lagrange's was Euler. Euler's was Bernoulli. Bernoulli's was another Bernoulli.

You can go back to the late 15th century to find mathematicians with "unknown" advisors.


> There is a ceremony later this year, but it's probably too soon.

Maybe this is what you meant by "it's probably too soon", but almost certainly the decisions about who gets it have already been made.


My bets are on another analytic number theorist for the Fields this cycle, namely James Maynard.

Paul Nelson, a.k.a. czm of the Quake 3 pro scene! https://www.youtube.com/watch?v=PcbpIntnG8c

I don't believe it... it's really him https://www.youtube.com/watch?v=ZHpG135ge7Y !

Holy crap! I had no idea. I was a big follower of the Quake 3 and UT2K4 scene back in the day.

Is there a reason we're obsessed with primes beyond aesthetics? Why does this set of numbers garner all the headlines as opposed to some other arbitrary integer sequence like the Recamán numbers [0] ?

If tomorrow someone discovered a closed-form equation for the nth prime, how would mathematics/the world change?

[0] https://en.wikipedia.org/wiki/Recamán%27s_sequence


Yes. Just within mathematics, prime numbers have a tendency to show up in many fields not obviously related to number theory. As an example: in the study of symmetries (group theory [0]), which is important to number theory, geometry, and even many parts of physics, prime numbers help us decompose a complicated symmetry into simpler ones (using the Sylow theorems [1]).

Prime numbers are also used for cryptography due to their computational properties.

Also I think we have shown, using a variant of Galois theory, that there cannot be a closed-form solution for the nth prime.

But there are more headlines about primes relative to their prevalence in research. I would guess that this is because:

- they can easily be explained to people

- they have been studied for thousands of years

- open problems still persist

In other parts of research you might need at least an undergraduate degree just to understand the definitions, which makes headlines less sexy.

[0] https://en.wikipedia.org/wiki/Finite_group

[1] https://en.wikipedia.org/wiki/Sylow_theorems


I think it's about the fact that they are a mystery at the heart of maths. You can start describing maths from the unit, addition, and the negative sign. If you start combining those, you get the natural numbers, the integers, ... but even before the natural numbers come the prime numbers. Primes are the most fundamental set of numbers in mathematics, from which you can generate the natural numbers.

But as you say, even after so many years they are still relevant, useful and mysterious. They are in a wildly different category from other sets and numerical series. They are the most central element of maths that we still don't understand. And central means that so many other parts of maths derive from it, and therefore we end up coming across prime numbers everywhere. We use them to analyze so many other parts of maths, yet they remain elusive to analysis themselves. It's a fundamental, recurrent mystery that's also an extremely useful tool... one of the most beautiful things we know.


> you get the natural numbers, the integers, ... but even before the natural numbers come the prime numbers. Primes are the most fundamental set of numbers in mathematics, from which you can generate the natural numbers.

I don't really see how you can define prime numbers before the natural numbers.


The set of natural numbers contains the prime numbers. The set of prime numbers doesn't contain the natural numbers, but every natural number > 1 can be generated/described as a product of prime numbers (fundamental theorem of arithmetic).

Maybe my terminology was incorrect, I'm not good at maths, but that's what I meant.
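For concreteness, here is a minimal Python sketch of that "generation": the fundamental theorem of arithmetic in action via trial division.

    # Every n > 1 is a product of primes; recover them by trial division.
    def factorize(n):
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)  # the remaining cofactor is prime
        return factors

    print(factorize(360))  # [2, 2, 2, 3, 3, 5], i.e. 360 = 2^3 * 3^2 * 5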


"Also I think we have shown using a variant of Galois theory that there can not be a closed from solution for the nth prime."

I don't think this is correct. For one, it is not clear what "closed form" would mean in this context. I think a reasonable variant would be "is there a polynomial time algorithm that, given n, outputs the nth prime." While my guess would be that the answer is no, I am certain that this is not known (and probably far, far out of reach).


For anyone else wondering about more details:

https://www.quora.com/Is-finding-prime-numbers-in-P-or-NP

Interestingly, the Polymath4 project, around 2009, attempted to find such a polynomial-time algorithm.


Note that finding "a k-digit prime" is not the same as finding "the kth prime." I don't have any strong feelings either way about whether the former would be possible in polynomial time or not, but I would be pretty shocked if the latter were.
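To make the contrast concrete: finding a k-digit prime is easy if randomness is allowed, because primes are dense (roughly 1 in k*ln(10) among k-digit numbers) and primality testing is fast. A sketch of the standard randomized approach (Miller-Rabin, so "probable primes"; note this does nothing toward finding the kth prime):

    import random

    def is_probable_prime(n, rounds=40):
        # Miller-Rabin probabilistic primality test
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def random_k_digit_prime(k):
        # expected ~k*ln(10) candidates before a hit, by the PNT
        while True:
            n = random.randrange(10 ** (k - 1), 10 ** k)
            if is_probable_prime(n):
                return n

    print(random_k_digit_prime(100))  # a 100-digit probable prime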

> Also I think we have shown, using a variant of Galois theory, that there cannot be a closed-form solution for the nth prime.

Do you have a reference? This sounds interesting.

There are, of course, integer polynomials where the set of positive values is precisely the set of primes (think something like x²(y+1)-(z+x)y, but much more complicated) [2].

(This might seem like an interesting fact, but it really is not. All sets of numbers where a computer can decide whether a number belongs to the set have such a polynomial.)

[2] https://en.wikipedia.org/wiki/Formula_for_primes


What impact would this have on elliptic curve cryptography?

I am not a mathematician or cryptographer, but here's my understanding (please, please correct me if I'm wrong)

Modern cryptography relies on the hardness of integer factorization. Things you want to hide are intractable to calculate on even the most powerful machines due to the underlying math not being workable in polynomial time (or similar low degree time). Age of the universe hard to brute force with our machines and known algorithms.

It has to do with the complex structure of the problem in our intuitive dimension. We haven't thought of any possible way to speed it up classically, and it's not apparent if there are such techniques.

The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick.

If the hypothesis is true, it's quite possible that NP hard problems are the same. That in some unintuitive higher dimension math we can solve the hardest problems in polynomial time.

NP-hard problems are also proven to be equivalent complexity, and if you figure out one then the rest are also solvable. If we figure out the trick behind it all (if such a thing exists), we might break everything. Or maybe we'll prove that there is no such free lunch.

I'm one of the ones hoping for computing to be easy. It would unlock so many possibilities. Fast math, in silico experiments, protein folding, etc. It unfortunately would also break all of the underpinnings of our industry, though. SSL, privacy, crypto, infosec, everything about the Internet...

Most people are betting that primes are hard, and it certainly does look that way.


> Modern cryptography relies on the hardness of integer factorization... P vs NP

This is not true for elliptic curve cryptography, nor exactly true for RSA et al. ECC is based on the hardness of the elliptic-curve discrete logarithm problem, which is not exactly the same as integer factorization. However, both can be tackled in polynomial time using Shor's algorithm [0], so both will be vulnerable once we have quantum computers. There are a handful of promising post-quantum approaches, such as lattice cryptography. You don't need to solve P vs NP to break modern cryptography, you just need a computer that can run Shor.

[0] https://eprint.iacr.org/2017/598.pdf
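For intuition about what the elliptic-curve discrete logarithm problem is, here is a toy sketch over a tiny field. The curve and base point are made up for illustration (real ECC uses standardized curves over ~256-bit fields, where the brute-force search below is hopeless):

    p = 97         # tiny prime field, illustration only
    a, b = 2, 3    # curve y^2 = x^3 + 2x + 3 over F_97

    def ec_add(P, Q):
        # group law on the curve; None plays the point at infinity
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def scalar_mul(k, P):
        # double-and-add; computing k*P is fast even for huge k
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P)
            P = ec_add(P, P)
            k >>= 1
        return R

    G = (0, 10)              # on the curve: 10^2 = 100 = 3 (mod 97)
    Q = scalar_mul(23, G)
    # The ECDLP: given G and Q, recover some k with k*G == Q. Brute
    # force works here but scales with the group order.
    k, R = 1, G
    while R != Q:
        R = ec_add(R, G)
        k += 1
    print(k, scalar_mul(k, G) == Q)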


"The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick."

RH does not imply this (under any reasonable interpretation of what you wrote; e.g. RH would not imply a fast algorithm that given n outputs the nth prime).

The prime number theorem says that the number of primes up to x (denoted pi(x)) is "well approximated" by Li(x). The Riemann hypothesis says that this is a REALLY good approximation. In particular, it says that the error incurred in approximating pi(x) by Li(x) is of size about sqrt(x). (More precisely sqrt(x)log(x).)
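A small numerical sketch of that comparison (pure Python, self-contained): count primes with a sieve, approximate Li(x) by numerical integration, and put the gap next to the sqrt(x)*log(x) benchmark.

    from math import log, sqrt

    def primepi(x):
        # sieve of Eratosthenes; fine up to 10^6 for illustration
        sieve = bytearray([1]) * (x + 1)
        sieve[0] = sieve[1] = 0
        for q in range(2, int(x ** 0.5) + 1):
            if sieve[q]:
                sieve[q * q :: q] = b"\x00" * len(range(q * q, x + 1, q))
        return sum(sieve)

    def Li(x, steps=500000):
        # midpoint-rule approximation of the integral of dt/log(t) from 2 to x
        h = (x - 2) / steps
        return sum(h / log(2 + (i + 0.5) * h) for i in range(steps))

    for x in (10**4, 10**5, 10**6):
        # |pi(x) - Li(x)| stays far inside the sqrt(x)*log(x) benchmark
        print(x, primepi(x), round(Li(x)), round(sqrt(x) * log(x)))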


There is actually one interesting connection between RH and integer factorization. This has to do with the idea of "Mobius pseudorandomness." For simplicity, I'll phrase things in terms of not Mobius, but the Liouville function lambda.

The Liouville function comes up naturally if you are interested in whether the "typical" number has an even or odd number of prime factors. More precisely, define the function lambda(n) (the "Liouville function") as follows. lambda(n) = 1 if n has an even number of prime factors, and -1 if n has an odd number of prime factors. If you expected that about half of the positive integers have an even number of prime factors, then you would expect that lambda is 0 on average, i.e. that |(1/x) * sum_(n <= x) lambda(n)| tends to zero as x tends to infinity.

It turns out that this statement is equivalent to the prime number theorem. Further, it turns out that the Riemann Hypothesis is equivalent to the STRONGER statement that the sums |sum_(n<=x) lambda(n)| don't exceed size about sqrt(x). This is exactly what you would expect from the Central Limit Theorem if you thought that lambda(n) behaved like a Bernoulli random variable taking the values +1, -1.
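For anyone who wants to see these sums, here is a pure-Python sketch: sieve Omega(n), the number of prime factors counted with multiplicity, then accumulate lambda(n) = (-1)^Omega(n).

    def liouville_up_to(x):
        omega = [0] * (x + 1)  # Omega(n), prime factors with multiplicity
        for p in range(2, x + 1):
            if omega[p] == 0:  # untouched by smaller primes => p is prime
                pk = p
                while pk <= x:
                    for m in range(pk, x + 1, pk):
                        omega[m] += 1
                    pk *= p
        return [(-1) ** k for k in omega]

    x = 10**5
    lam = liouville_up_to(x)
    L = sum(lam[1:])
    # RH is equivalent to |L(x)| growing no faster than about x^(1/2 + eps);
    # compare the two magnitudes printed here.
    print(L, round(x ** 0.5))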

What does this have to do with factorization? It's easy to compute lambda(n) if you know how to factor n, so computing lambda is certainly no harder than factoring. On the other hand, we don't know of any way to compute the Liouville function without factoring n. This isn't a proof that computing lambda(n) is at least as hard as factoring n, but it certainly seems to be the case.

Now, one rigorization of what it means for a sequence to behave randomly is that it doesn't correlate with any easily computable sequences. (Replace "easily computable" with "computable" and you are not too far from the definition of Martin-Löf randomness.) In particular, it shouldn't be easily computable.

So in other words, if you think that lambda behaves randomly, then it shouldn't be easily computable, which in turn means that factoring is hard!

But as mentioned above, one of the simplest consequences of lambda behaving randomly is (by the Central Limit Theorem) the Riemann hypothesis!


Hmmm? Is someone experimenting with GPT-3 commenting on HN? Because it sure feels like it - the comment is mostly grammatical, uses approximately correct jargon, and is completely nonsensical.

Hello Ilya, if you're wondering why your comment isn't appreciated, it is because you are breaking the rules.

Please do not accuse people of being bots or shills, and if you have proof that someone is a bot (or an actual shill) then mail hn@ycombinator.com

Have a look at:

https://news.ycombinator.com/newsguidelines.html , the bot bit isn't explicitly mentioned but "Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like." covers it nicely with the 'and the like' bit.


Thanks for the clarification and for pointing to the forum guidelines. I hoped I was being charitable, trying to explain away the cluelessness of the comment.

You're welcome.

The thing is HN has lots of people who do not have English as their first language and the bulk of these comments would be directed at people doing their level best to get it right. And being wrong in itself is a good basis for a correction rather than a put down. See the other comments to your parent comment for examples of how you could handle that with more grace.


I have no idea why this great comment got downvoted.

Elliptic curve cryptography does not rely on the difficulty of prime factorization. That is likely why that comment you are replying to is downvoted; that comment doesn't answer the question that it is replying to.

For an intro to the math behind elliptic curve cryptography, here's a good read: https://hackernoon.com/what-is-the-math-behind-elliptic-curv...


This is new to me. I appreciate the response.

Lazy trolling.

I suspect somewhere in the Riemann-Zeta function is a reciprocal that only cancels out all the imaginary terms once and only once, thus any number with more than one factor wouldn't cancel out.

I haven't the math skills to find that reciprocal.


I love how every quote by the mathematician clearly states that this does not bring us appreciably closer to the Riemann hypothesis, or that it is very unlikely, while the bulk of the article seems to imply otherwise.

While I had a couple of gripes with the article, I don't get the sense that the article exaggerates the significance of this work. The (Generalized) Riemann Hypothesis implies the (Generalized) Lindelof Hypothesis. The latter is a particularly important consequence of GRH. Not only is it worthwhile, if you are interested in GRH, to verify its predictions (if you proved one of the predictions false, then you would disprove GRH, and if you proved it true, this would in some sense provide "theoretical support" for GRH), but Lindelof is useful in its own right. In particular, it can sometimes serve as a substitute for GRH, and results that can substitute for GRH (such as Bombieri-Vinogradov, zero-density estimates, and subconvexity estimates) are very valuable. Contexts where subconvexity estimates can serve in place of GRH include Quantum Unique Ergodicity, and representations of integers by ternary quadratic forms.

For this reason, this paper is a much bigger deal (in the analytic number theory world / in my opinion) than the paper a couple years back that made big headlines about Jensen polynomials (and Nelson's paper is surely one of the biggest results in the last couple of years in analytic number theory). While the latter was more of a "curiosity" / cute paper on an obscure way of thinking about RH, Nelson's paper is big progress on a well-studied weaker form of GRH with significant utility for applications. (I mean, of course, applications in mathematics.)


(To expand on this a bit: the reason that GRH is so important is because of its ubiquitous applications in mathematics. Maybe you can think of these applications as different food items, whose recipes call for GRH. The problem is that GRH is a very hard ingredient to come by! (That is, if you want your result to be "unconditional," in the sense of not assuming anything unproven.)

In some of these cases, however, (such as the two mentioned in the comment above), substituting the Generalized Lindelof Hypothesis in place of GRH (remarkably) would give you a result just as good! (Think "I can't believe it's not GRH!")

Unfortunately, Lindelof is also presently out of reach in almost all contexts. Subconvexity estimates are partial progress towards Lindelof. If you substitute them in place of GRH in your recipes, you'll get something better than if you used the cheapest ingredient (the convexity bound), but not something quite as good as if you had used Lindelof or GRH.

Some of the most important results in analytic number theory are "I can't believe it's not GRH!" in their respective contexts. Here is one such example: For applications in sieve theory (e.g., the 2013 breakthrough work on bounded gaps in primes), the Bombieri-Vinogradov theorem is a substitute for GRH that is just as good as GRH.)


Probably unlike the author of the article, the mathematician wasn't interested in generating clicks.

Scientific journalism. Buzzwords, speculations that don’t sound plausible to the scientific community, etc

I don't think that's fair. Much scientific journalism, by focusing on exciting unexpected results, leaves one with an impression that there is too much dogma and perhaps other errors.

Math is very hard, but has the luxury of correctness being better defined and easier to verify than about any other field. There simply isn't as much room for conspiracy thinking. And accordingly, the articles in Quanta Magazine are both great to read and don't have a side effect of causing any outsider smugness.

That isn't to say math doesn't have controversies. The utility of proof assistants and the degree to which people should study and use unifying frameworks like Category Theory are examples of ongoing controversies. But it's very fortunate to be immune to issues like a widespread replication crisis.

Math is really really different than Science.


I'm not aware of category theory being controversial. A much better example would be Mochizuki's IUT controversy.

It's not controversial in terms of correctness, but it is still a bit controversial in terms of whether it's worth the effort?

See all those "but does this help me get results for existing problems?" type questions. There have always been mathematicians that prefer to first establish more structure vs those that just want to get down to business. Category theory especially in the past really pushed the envelope on the "more structure" side of things, inflaming those old tensions.


This fascination with primes has, of late, seemed a little misplaced to me. Primes emerge as a property of the number system that you have.

The number system itself is an approximation of the natural world where actions like addition, subtraction, multiplication and division exist. The natural world in question is, as far as we know today, what we humans observe.

Now, imagine a second world whose inhabitants were excellent at approximations, to the extent that their entire Standard Model depended on differentiation and integration. Would they care as much about primes and try to fit everything into "beautiful" math? I don't know the answer to this, but I've found this thought experiment useful to put our maths in its place.


I don't see how you could model the world without addition or subtraction. Even if you started out with differentiation, wouldn't addition and subtraction be implicitly defined by it? And once you have those, you have multiplication and division as well, and primes.

I don't think we have even managed to imagine a world where the natural numbers wouldn't be a useful building block.

That means the natural numbers are a fundamental logical building block of any formal system like mathematics, doesn't matter what the laws of physics are.


It might be that the Riemann Hypothesis will be the first of Math's "Hard Problems" that will be solved by AI before humans...

It might. However, since there's no record of AI solving big, open conjectures such as the Riemann Hypothesis, I find this quite unlikely.

Well, you don't know how many years humans will need in order to solve it ;-).

I'm starting to think Quanta Magazine writes headlines this vague as a conscious clickbait strategy by now, because these pull the biggest audience.


