Second, the subtitle "Paul Nelson has solved the subconvexity problem..." is strange. The subconvexity problem, for a given L-function, is to lower the percentage described above from 25% to "any positive number"; in other words, to bridge the gap between the convexity bound (which is "trivial") and Lindelof (a consequence of GRH). The only way that the statement "Paul Nelson has solved THE subconvexity problem..." could maybe be accurate is if Nelson proved the Lindelof hypothesis for "all" L-functions. Which is far from the case. (What makes subconvexity so interesting, as Nelson says in the article, is that it is a problem where you can make partial progress towards a particularly important consequence of GRH. And Nelson's result is exactly that: partial progress.) More accurate would be "Paul Nelson has made significant progress on the general subconvexity problem."
The attraction of Quanta has always been that its writers seemed to have an intrinsic passion for math, treating it as more than just a gee-whiz topic to gloss over in between the latest panicked missives about politics and how cellphones are destroying our children. One of its most delightful qualities has been the consistent willingness to take a few extra sentences to explain even advanced concepts in a manner that is basically technically correct.
Certainly the article remains far better than the average mass media STEM writing, but Quanta should take greater care to keep their quality pristine. They have been utterly unique and have set the example for everyone else.
The first one is a bit worse. I think if I had read this knowing nothing about convexity I would have gotten the wrong idea from the arbitrary choice of 1%. I understand the desire to simplify, but it is an art to simplify while keeping what you say technically correct. Quanta usually does an excellent job of this. I wouldn't say that the first point above is an egregious error by any means, but I think it is a slip.
Given that, it seems fair-ish to say that Nelson solved the subconvexity problem. You just have to understand that the problem is really a family of problems of increasing hardness (e.g., proving tighter and tighter bounds), and solutions more powerful than Nelson's may come later.
I still think it's vague to talk about "the" subconvexity problem without specifying what variable you want the bound to be subconvex in, but, really, who am I to argue with Ian Petrow..!
To be fair, I think there are economic conditions that drive the unfortunate situation. Quanta is very lucky to have enormous free support from a billionaire-funded foundation (Simons Foundation).
When publications become desperate for cash, they dive into the politics and culture war rathole.
My understanding is that Nelson has indeed done this for every L-function, although the exact x varies with the L-function. How is that not solving the subconvexity problem?
But this is not (I think) how most people would speak, and I wouldn't say this. I would say the subconvexity problem is the problem of bridging the gap between 25% (convexity) and 0% (Lindelof). Subconvexity bounds for various L-functions (zeta, Dirichlet L-functions, and higher rank L-functions) are a very active area of research, and so it seems strange to me to say "the subconvexity problem is solved."
"My understanding is that Nelson has indeed done this for every L-function." One minor note. Nelson's work applies if the L-function is automorphic. By the Langlands philosophy, this should be true for any reasonable L-function, but this is far from known, even in, e.g., the simple case of the L-function corresponding to the tensor product of two automorphic L-functions.
Edit: I am wrong about the statement "this is not (I think) how most people would speak." It looks like beating the convexity bound at all is often described as "solving the subconvexity problem," e.g. in the introduction here https://link.springer.com/article/10.1007/s002220200223. This description is a bit strange to me, but if it is standard then it is unfair for me to call it an "inaccuracy," per se; thanks for pointing this out.
The first, pointed out already by gavagai691, is that it is not known that "every L-function" is "standard" (this is a major open problem in the Langlands program).
The second is that the general "subconvexity problem" asks for a nontrivial estimate not only for each fixed L-function as its argument varies, but more generally as the L-function itself varies. The general case of the problem remains open.
Is there a reason or benefit to stating it like that rather than saying that the ratio of output to input digit size asymptotically approaches zero? Or did I misunderstand the explanation?
What Lindelof says more precisely is that for any ε > 0 and any L-function L(s), there is a constant C_(ε,L) (depending only on ε and L, but crucially not on s) such that |L(s)| <= C_(ε,L) (1+|Im(s)|)^ε.
I was in awe at how many primes I could find on my Cyrix 333 MHz with my own noddy code.
Left it running all day when at school.
Never found anything of value, but I learned how to build a Beowulf cluster and learned a lot about arbitrary-precision integer implementation.
Awesome that your parents got you a PC; we would have died for one if it had existed then!
Though I do have some recollection of hitting account limits on compute time on the CDC machine. My dad never mentioned any overages though. He was working in CDC assembler, building a simulator for something with the odd name (at the time) of a "microcontroller". It was a prof's project that needed an emulator, so that was his term project.
I'm fairly sure I did develop them but just deleted them, as I thought nobody would want that.
In retrospect, teenage me should probably have told other people about my websites.
Also, according to MathGenealogy, his grand-grand-grand-grand-grand-grand-grand-grand thesis advisor was none other than Poisson himself. Poisson's advisors were Lagrange and Laplace. Lagrange's was Euler. Euler's was Bernoulli. Bernoulli's was another Bernoulli.
You can go back to the late 15th century to find mathematicians with "unknown" advisors.
Maybe this is what you meant by "it's probably too soon", but almost certainly the decisions about who gets it have already been made.
If tomorrow someone discovered a closed-form equation for the nth prime, how would mathematics/the world change?
Prime numbers are also used for cryptography due to their computational properties.
Also, I think we have shown, using a variant of Galois theory, that there cannot be a closed-form solution for the nth prime.
But there are more headlines about the primes relative to their prevalence in research. I would guess that this is because:
- they can easily be explained to people
- they have been studied for thousands of years
- open problems still persist
In other parts of research you might need at least an undergraduate degree just to understand the definitions, which makes headlines less sexy.
But as you say, even after so many years they are still relevant, useful and mysterious. They are in a wildly different category from other sets and numerical series. They are the most central element of maths that we still don't understand. And central means that so many other parts of maths derive from them, and therefore we end up coming across prime numbers everywhere. We use them to analyze so many other parts of maths, and yet they remain elusive to analysis themselves. It's a fundamental, recurrent mystery that's also an extremely useful tool... one of the most beautiful things we know.
I don't really see how you can define prime numbers before the natural numbers.
Maybe my terminology was incorrect, I'm not good at maths, but that's what I meant.
I don't think this is correct. For one, it is not clear what "closed form" would mean in this context. I think a reasonable variant would be "is there a polynomial time algorithm that, given n, outputs the nth prime." While my guess would be that the answer is no, I am certain that this is not known (and probably far, far out of reach).
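To make the "polynomial time" framing concrete, here is a minimal sketch of my own (the function name and approach are illustrations, not from the thread) of the naive baseline, whose running time grows with the value of n and is therefore exponential in the number of digits of n:

```python
# Naive nth-prime computation by trial division. The runtime grows roughly
# with the VALUE of n, i.e. exponentially in the bit-length of n, so it is
# nowhere near the polynomial-time algorithm asked about above.

def nth_prime(n):
    """Return the nth prime (1-indexed) by simple trial division."""
    assert n >= 1
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        # Trial division up to sqrt(candidate).
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

print(nth_prime(1), nth_prime(10), nth_prime(100))  # 2 29 541
```

Even heavily optimized versions of this idea (sieves, prime-counting tricks) only improve the exponent in the value of n, not the exponential dependence on its bit-length.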
Interestingly, the Polymath4 project, in/around 2009, attempted to find such a polynomial-time algorithm.
Do you have a reference? This sounds interesting.
There are, of course, integer polynomials where the set of positive values is precisely the set of primes (think something like x²(y+1)-(z+x)y, but much more complicated).
(This might seem like an interesting fact, but it really is not. All sets of numbers where a computer can decide whether a number belongs to the set have such a polynomial.)
Modern cryptography relies on the hardness of integer factorization. Things you want to hide are intractable to calculate on even the most powerful machines, because the underlying math is not workable in polynomial time (or similar low-degree time). It would take on the order of the age of the universe to brute-force with our machines and known algorithms.
It has to do with the complex structure of the problem in our intuitive dimension. We haven't thought of any possible way to speed it up classically, and it's not apparent if there are such techniques.
The Riemann Hypothesis conjectures that there is a way to know the distribution of primes in a higher dimensional imaginary math. That the unintuitive and difficult system that gives us primes in real number space is somehow contained in a larger imaginary space, with some kind of regular structure, and that they are immediately accessible and calculable once you know the trick.
If the hypothesis is true, it's quite possible that NP hard problems are the same. That in some unintuitive higher dimension math we can solve the hardest problems in polynomial time.
NP-complete problems are also proven to be of equivalent complexity: if you figure out one, then the rest are also solvable. If we figure out the trick behind it all (if such a thing exists), we might break everything. Or maybe we'll prove that there is no such free lunch.
I'm one of the ones hoping for computing to be easy. It would unlock so many possibilities. Fast math, in silico experiments, protein folding, etc. It unfortunately would also break all of the underpinnings of our industry, though. SSL, privacy, crypto, infosec, everything about the Internet...
Most people are betting that primes are hard, and it certainly does look that way.
This is not true for elliptic curve cryptography, nor exactly true for RSA et al. ECC is based on the hardness of the elliptic-curve discrete logarithm problem, which is not the same problem as integer factorization. However, both can be tackled in polynomial time using Shor's algorithm, so both will be vulnerable once we have sufficiently large quantum computers. There are a handful of promising post-quantum approaches, such as lattice cryptography. You don't need to solve P vs NP to break modern cryptography; you just need a computer that can run Shor.
RH does not imply this (under any reasonable interpretation of what you wrote; e.g. RH would not imply a fast algorithm that given n outputs the nth prime).
The prime number theorem says that the number of primes up to x (denoted pi(x)) is "well approximated" by Li(x). The Riemann hypothesis says that this is a REALLY good approximation. In particular, it says that the error incurred in approximating pi(x) by Li(x) is of size about sqrt(x). (More precisely sqrt(x)log(x).)
The Liouville function comes up naturally if you are interested in whether the "typical" number has an even or odd number of prime factors. More precisely, define the function lambda(n) (the "Liouville function") as follows. lambda(n) = 1 if n has an even number of prime factors, and -1 if n has an odd number of prime factors. If you expected that about half of the positive integers have an even number of prime factors, then you would expect that lambda is 0 on average, i.e. that |(1/x) * sum_(n <= x) lambda(n)| tends to zero as x tends to infinity.
It turns out that this statement is equivalent to the prime number theorem. Further, it turns out that the Riemann Hypothesis is equivalent to the STRONGER statement that the size of the sums |sum_(n<=x) lambda(n)|, where lambda(n) is the "Liouville function," don't exceed size about sqrt(x). This is exactly what you would expect from the Central Limit Theorem if you thought that lambda(n) behaved like a Bernoulli random variable taking the values +1,-1.
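A small empirical check of these statements (a sketch of my own, not from the comment; it computes lambda(n) by fully factoring n, and the function names are hypothetical):

```python
import math

# lambda(n) = (-1)^Omega(n), where Omega(n) counts prime factors of n with
# multiplicity. Computed here by full trial-division factorization.

def liouville(n):
    """Liouville function: +1 if n has an even number of prime factors, else -1."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1  # leftover prime factor
    return -1 if count % 2 else 1

x = 10**5
partial = sum(liouville(n) for n in range(1, x + 1))

print("average:", partial / x)                    # tends to 0 (equivalent to PNT)
print("|sum| / sqrt(x):", abs(partial) / math.sqrt(x))  # bounded, per RH
```

Empirically the average is tiny and the partial sum stays on the order of sqrt(x), exactly the square-root cancellation a random-sign model would predict.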
What does this have to do with factorization? It's easy to compute lambda(n) if you know how to factor n, so computing lambda is certainly no harder than factoring. On the other hand, we don't know of any way to compute the Liouville function without factoring n. This isn't a proof that computing lambda(n) is at least as hard as factoring n, but it certainly seems to be the case.
Now, one rigorization of what it means for a sequence to behave randomly is that it doesn't correlate with any easily computable sequences. (Replace "easily computable" with "computable" and you are not too far from the definition of Martin-Löf randomness.) In particular, it shouldn't be easily computable.
So in other words, if you think that lambda behaves randomly, then it shouldn't be easily computable, which in turn means that factoring is hard!
But as mentioned above, one of the simplest consequences of lambda behaving randomly is (by the Central Limit Theorem) the Riemann hypothesis!
Please do not accuse people of being bots or shills, and if you have proof that someone is a bot (or an actual shill) then mail firstname.lastname@example.org
Have a look at:
https://news.ycombinator.com/newsguidelines.html . The bot bit isn't explicitly mentioned, but "Please don't post insinuations about astroturfing, shilling, brigading, foreign agents and the like." covers it nicely with the 'and the like' bit.
The thing is HN has lots of people who do not have English as their first language and the bulk of these comments would be directed at people doing their level best to get it right. And being wrong in itself is a good basis for a correction rather than a put down. See the other comments to your parent comment for examples of how you could handle that with more grace.
For an intro to the math behind elliptic curve cryptography, here's a good read: https://hackernoon.com/what-is-the-math-behind-elliptic-curv...
I haven't the math skills to find that reciprocal.
For this reason, this paper is a much bigger deal (in the analytic number theory world / in my opinion) than the paper a couple years back that made big headlines about Jensen polynomials (and Nelson's paper is surely one of the biggest results in the last couple of years in analytic number theory). While the latter was more of a "curiosity" / cute paper on an obscure way of thinking about RH, Nelson's paper is big progress on a well-studied weaker form of GRH with significant utility for applications. (I mean, of course, applications in mathematics.)
In some of these cases, however, (such as the two mentioned in the comment above), substituting the Generalized Lindelof Hypothesis in place of GRH (remarkably) would give you a result just as good! (Think "I can't believe it's not GRH!")
Unfortunately, Lindelof is also presently out of reach in almost all contexts. Subconvexity estimates are partial progress towards Lindelof. If you substitute them in place of GRH in your recipes, you'll get something better than if you used the cheapest ingredient (the convexity bound), but not something quite as good as if you had used Lindelof or GRH.
Some of the most important results in analytic number theory are "I can't believe it's not GRH!" in their respective contexts. Here is one such example: For applications in sieve theory (e.g., the 2013 breakthrough work on bounded gaps in primes), the Bombieri-Vinogradov theorem is a substitute for GRH that is just as good as GRH.
Math is very hard, but has the luxury of correctness being better defined and easier to verify than in about any other field. There simply isn't as much room for conspiracy thinking. And accordingly, the articles in Quanta Magazine are both great to read and don't have a side effect of causing any outsider smugness.
That isn't to say math doesn't have controversies. The utility of proof assistants and the degree to which people should study and use unifying frameworks like Category Theory are examples of ongoing controversies. But it's very fortunate that math is immune to issues like a widespread replication crisis.
Math is really, really different from science.
See all those "but does this help me get results for existing problems?" type questions. There have always been mathematicians that prefer to first establish more structure vs those that just want to get down to business. Category theory especially in the past really pushed the envelope on the "more structure" side of things, inflaming those old tensions.
The number system itself is an approximation of the natural world where actions like addition, subtraction, multiplication and division exist. The natural world in question is, as far as we know today, what we humans observe.
Now, imagine a second world which was excellent at approximations to the extent that their entire Standard model depended on differentiation and integrations. Would they care as much about primes and try to fit everything into “beautiful” math? I don’t know the answer to this but I’ve found this thought experiment useful to put our maths in its place.
That means the natural numbers are a fundamental logical building block of any formal system like mathematics, doesn't matter what the laws of physics are.