> There are infinitely many primes that can be formulated by squaring two whole numbers and adding them together. [...] By insisting that one of the numbers you’re squaring be odd, perhaps [...] makes the problem much harder.
Does it? For any number a, a^2 = a (mod 2), and primes greater than 2 are all odd, so if a prime p = a^2 + b^2, doesn't one of a or b have to be odd? Reducing mod 2, p = 1 (mod 2), a^2 + b^2 = 1 (mod 2), a + b = 1 (mod 2), so either a = 0 (mod 2) and b = 1 (mod 2) or vice-versa?
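A quick empirical check of that parity argument, as a minimal sketch in Python (sympy is assumed only for the primality test):

    # For every odd prime p < 10000 expressible as a^2 + b^2, confirm that
    # exactly one of a, b is odd (p = 2 = 1^2 + 1^2 is the lone exception).
    from sympy import isprime

    for p in range(3, 10000):
        if not isprime(p):
            continue
        for a in range(1, int(p**0.5) + 1):
            b2 = p - a * a
            b = int(b2**0.5)
            if b * b == b2:
                assert (a + b) % 2 == 1, (p, a, b)  # one odd, one even

It runs without tripping the assertion, exactly as the mod-2 reduction predicts.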
edit:
If Euler proved infinitely many such primes exist, then "With this in hand, Green and Sawhney proved Friedlander and Iwaniec’s conjecture: There are infinitely many primes that can be written as p^2 + 4q^2." makes no sense without a further condition on p or q: let (in my notation) a = p be odd and b = 2q be even.
Now having finished the article, I think this was just sloppy writing, and the actual accomplishment is related to the post-perhaps clause: one of p or q has to itself be a perfect square? Anyway, I have very little certainty about what was actually accomplished from reading this article.
They state that condition when they introduce the p^2 + 4q^2 form, but at the point that GP quoted ("one must be odd"), they had only referred to them as "numbers" and "whole numbers". So it's not clear whether the article considers p and q being prime as a condition on p^2 + q^2 or not. GP's point is valid.
I disliked the way this article is written; it reminds me of "we had to roll back to the previous UI because the new one was so efficient that people spent less time on the website".
> Proposals should be aligned with one of the following categories:
> Production grade software tools: AI for auto-formalization, proof generation, synthesis of verifiable code, and more
> Datasets: Open-source collections of theorems, proofs, and math problems
> Field building: Textbooks, courses, and resources to grow the AI-for-math community
> Breakthrough ideas: High-risk, high-reward approaches to AI-driven math research
> the primes aren’t random. They’re completely determined
I wonder how true this statement is, but it probably depends on one's understanding of the word 'random'. In the colloquial sense it is certainly true, since (truly) 'random' means 'occurring without (discernible) rules'.
However, in a stricter definition, 'random' means (if I'm not mistaken, hobby-thinker here) just "a given set of numbers S is called random with respect to a set of binary tests T when all individual procedures in T yield a positive outcome". That is, you can only ever test a finite set S against a finite set T, meaning the algorithm that generated your random-looking set S can, potentially, always be amended to creep closer to passing the failed tests (and the reverse is also true: one can, potentially, always move the goalposts and add another test to T to make a set fail that used to pass).
Ultimately, then, randomness—where it is not occurring eo ipso as in radioactive decay—is always a relative (S WRT T), social (people must agree), and finite (can't test against unseen members of S and, per precondition, can't give a rule to cover infinity as in 'divisible by eleven') procedure (that may or may not practically terminate). In other words, once we settle on a given set of tests T to determine what is random, one can (potentially) always come up with an algorithm that passes the tests, thus looks random though it is determined.
I think that just shows that those tests for randomness can only tell you when something is not random, not when it's actually random. There are problems, I think, with giving a rigorous definition of random, but I think most would agree that if you have an algorithm that predicts with certainty what the next element in a sequence will be, then that sequence isn't random.
Then what about the digits of π? Wikipedia says that "The decimal digits of π appear to be randomly distributed,[a] but no proof of this conjecture has been found"; note [a]: "In particular, π is conjectured to be a normal number, which implies a specific kind of statistical randomness on its digits in all bases."
So that sounds pretty random to me, yet there are algorithms that give you π in its decimal form, and there are even ways to compute the n-th digit of π without having to compute the preceding ones (the Bailey–Borwein–Plouffe formula does this for hexadecimal digits)—which sounds pretty determined to me.
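As a rough illustration (a sketch assuming the mpmath package), the digit frequencies of π do pass a simple chi-square uniformity test:

    # Chi-square uniformity test on the first 10,000 decimal digits of pi.
    from mpmath import mp

    mp.dps = 10010                                 # ~10,010 significant digits
    digits = str(mp.pi).replace(".", "")[:10000]   # "31415926535..."

    expected = len(digits) / 10
    chi2 = sum((digits.count(d) - expected) ** 2 / expected
               for d in "0123456789")
    # The 5% critical value for chi-square with 9 degrees of freedom is ~16.92.
    print(f"chi^2 = {chi2:.2f}  (looks uniform if well below ~16.92)")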
The digits of pi can pass some statistical tests of randomness but aren't in fact random, because they're the digits of pi - as you note, there's an algorithm for producing them. As I mentioned, there are some problems with giving a rigorous definition of randomness, but one of the main approaches is through Kolmogorov complexity: "Kolmogorov's definition of a random string was that it is random if it has no description shorter than itself via a universal Turing machine."
https://en.wikipedia.org/wiki/Random_sequence#Modern_approac...
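A compression-based proxy for that idea (hedged: true Kolmogorov complexity is uncomputable, and zlib is only a crude stand-in): a compressor exposes an obviously patterned digit string as non-random, but it cannot tell the digits of π apart from uniformly random digits, since both compress by about the same amount (each is a 10-symbol alphabet stored in 8-bit characters).

    import random
    import zlib

    n = 100_000
    random.seed(0)
    rand_digits = "".join(random.choice("0123456789") for _ in range(n))
    pattern = "1234567890" * (n // 10)      # obviously non-random
    # (Substitute the real digits of pi, e.g. via mpmath, for a fair test;
    # they compress about as poorly as the random digits do.)

    for name, s in [("random", rand_digits), ("pattern", pattern)]:
        ratio = len(zlib.compress(s.encode())) / len(s)
        print(f"{name}: compressed to {ratio:.0%} of original size")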
This is a huge result! It seems like it could go a non-trivial amount of the way towards solving the general problem of proving that the primes show up with the expected density for most "normal" sets of polynomials.
by this I meant things like proving that for any polynomial there are infinitely many primes of that form, as long as the polynomial doesn't factor (so x^2 + 1 qualifies, but x^3 + 1 = (x + 1)(x^2 - x + 1) would not), and that their density corresponds to what you'd expect from the prime number theorem.
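Roughly what "expected density" means can be sketched numerically (assuming sympy; this is a heuristic check, not the conjecture itself): count primes of the form n^2 + 1 and compare with the naive prime-number-theorem estimate. The ratio settles toward a constant, which is exactly the kind of correction factor these conjectures predict.

    # Compare the count of primes n^2 + 1 with the naive estimate
    # sum 1/log(n^2 + 1) suggested by the prime number theorem.
    from math import log
    from sympy import isprime

    for N in (10**3, 10**4, 10**5):        # the last takes a little while
        actual = sum(1 for n in range(1, N + 1) if isprime(n * n + 1))
        naive = sum(1 / log(n * n + 1) for n in range(2, N + 1))
        print(N, actual, round(naive), round(actual / naive, 3))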
let’s not ignore the practical side. Algorithms for studying primes drive advances in computing, machine learning, and data science. Cryptography literally depends on them. Plus, big unsolved problems like the Riemann hypothesis could completely reshape number theory.
Green and Sawhney’s work is especially exciting because it shows how tools from one field—Gowers norms—can unlock progress in another. That kind of cross-disciplinary insight is where breakthroughs happen. And yeah, it’s fair to question funding priorities, but basic research has given us antibiotics, GPS, and even computers. Without it, we’d still be in caves.
Cryptanalysis relies on deep conjectural heuristics in analytic number theory. These conjectures becoming theorems wouldn't affect cryptanalysis at all, because their validity is already baked in. If, however, any of these conjectures turn out to be false, there would be ramifications.
In the recent past, I'd accepted that titles are not informative anymore, but there was hope that the subtitles were actually informative (i.e. the subtitle was the real title, and the title was the clickbait).
In this article, neither is informative.
And even several paragraphs in, you don't know what the general area of the proof is. Just meandering, long-winded storytelling.
If any of you authors/editors of this magazine are here: please, for the love of all that's holy, put the crux of the matter at the top and then go off to tell your beautiful, humanized, whatever... story.
Prime numbers, in their spirit, are like decomposing a problem into independent smaller problems. It is a search for a divide-and-conquer algorithm. So when people learn how to decompose and put together new problems there is a source for new knowledge. One of the most trivial examples: if you can factorize a polynomial, you have a simple way to find its roots by solving the smaller problem of finding the roots of each factor; the union of all those roots is the root set of the initial polynomial. In that example the three parts (decompose, solve subproblems, compose the final solution) are clearly exposed.
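That pattern is easy to show concretely (a sketch using sympy):

    # Decompose: factor the polynomial. Solve: find roots of each factor.
    # Compose: the union of those roots is the root set of the original.
    from sympy import Poly, factor_list, roots, symbols

    x = symbols("x")
    p = Poly(x**3 - 6*x**2 + 11*x - 6, x)     # = (x - 1)(x - 2)(x - 3)

    _, factors = factor_list(p)
    all_roots = set()
    for f, _mult in factors:
        all_roots |= set(roots(f, x))
    print(sorted(all_roots))                   # [1, 2, 3]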
Nowadays we are wondering if LLMs are going to be the next prime numbers. The question is whether solving the language problem is going to provide us the key to solving the AGI problem. We still don't know what the equivalent of a prime is for an LLM, that is, the smallest independent part that allows it to express some knowledge; the pieces could be embeddings, or the topology of the layers, or some new insight.
Some more random ideas: just as Norvig sees Python as a more practical Lisp, the basic ideas from prime numbers permeate a great part of mathematics. You can be really far from the root but the principles are always with you; primes become second nature, you internalize all their properties, and you dynamically learn to see primes as features in many fields (prime ideals, the spectrum of a ring, points in commutative algebra).
The problem of how many primes there are (in relation to the positive integers) is like wondering whether a given decomposition exists in a general enough sense to let us solve the general problem. Few primes would imply that the theory could solve some kinds of problems, but not many. What is the LLM's "prime number"? That is the question of whether solving the language problem will allow us to solve some very general problems, like a good approximation to AGI. The problem of whether LLMs will open the way to AGI could be the next Riemann hypothesis, once we succeed in defining what a prime number is in relation to an LLM.
We are trying to prove scaling laws for how LLMs improve when increasing the number of parameters; this is like trying to guess the convergence of an infinite series by using a finite sum. The analogue of a Riemann hypothesis could be defining a certain kind of LLM and conjecturing whether it could attain AGI past some threshold in the number of parameters.
Edited: Sorry for being overly verbose this morning!
Quote: 'Prime numbers, in their spirit, are like decomposing a problem into independent smaller problems. So when people learn how to decompose and put together new problems there is a source for new knowledge.'
Quanta Magazine tries to make these topics accessible to people with little background in them, so mentioning the Riemann hypothesis, although adequate and needed in, say, a lecture on the topic, would not really help that goal.
But the primes are obviously not random, independently of the Riemann hypothesis - they're determinate consequences of the number system. Maybe I missed your point
In 1933, A. Kolmogorov used H. Lebesgue's measure theory to define random variables. With that definition, we could have a random variable X whose values are only prime numbers and such that for each prime number p and the probability measure P,
P(X = p) > 0,
that is, the probability that X = p is positive.
So, with X, the prime numbers are random.
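A toy version of such an X (a sketch; the geometric weighting is just one arbitrary choice that gives every prime positive probability):

    # Sample a random variable X whose values are primes, with
    # P(X = p_n) = 2^(-n) for the n-th prime, so P(X = p) > 0 for all p.
    import random
    from sympy import prime          # prime(n) is the n-th prime

    def sample_X(rng: random.Random) -> int:
        n = 1
        while rng.random() >= 0.5:   # geometric: halt with prob 1/2 per step
            n += 1
        return prime(n)

    rng = random.Random(42)
    print([sample_X(rng) for _ in range(10)])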
References:
With TeX markup, polished details on measure theory are in
H.\ L.\ Royden, {\it Real Analysis: Second Edition,\/} Macmillan, New York, 1971.\ \
Walter Rudin, {\it Real and Complex Analysis,\/} ISBN 07-054232-5, McGraw-Hill, New York, 1966.\ \
and polished details on probability theory based on measure theory are in
Leo Breiman, {\it Probability,\/} ISBN 0-89871-296-3, SIAM, Philadelphia, 1992.\ \
Jacques Neveu, {\it Mathematical Foundations of the Calculus of Probability,\/} Holden-Day, San Francisco, 1965.\ \
Uh, Joe, you have random variable X. What is its value?
Sam, just a minute. Let me draw a sample. Got one: X = 7.
But, Joe, 7 is not very surprising or interesting. Is there anything else?
Sam, sure, one more minute. Got one!
Joe, well, then, WHAT is it????
Sam, sorry, it has 2^12345 digits and will take a while to print it out; this morning I have a coffee shop meeting with Susan; and I don't want to miss Susan, cute, pretty, sweet, smart, darling, adorable, precious ..., and single!
Exercise: Show that there is a random variable Y with the same distribution as X and such that X and Y are independent random variables. I.e., knowledge of X tells us nothing about Y.
The intuitive concept of random is closer to unpredictable even given everything else, that is, what probability theory defines as independent.
There are more details in Royden, Rudin, Breiman, and Neveu. To preview, there is a non-empty set Omega with a collection F of subsets that form a sigma algebra and a measure P on F. Then random variable X is a measurable function from Omega to the set of all prime numbers. So, for some point w in Omega and function X, X(w) is a prime number. One can think of w as a trial.
Uh, this morning I'm working on some Rexx code, so here I can't reproduce or compete with the references by Royden, Rudin, Breiman, and Neveu.
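The exercise is also easy to realize concretely (a sketch continuing the toy sampler above; Y just uses an independent stream of coin flips):

    # X and Y use independent randomness, so knowing X says nothing about Y.
    import random
    from sympy import prime

    def sample(rng):
        n = 1
        while rng.random() >= 0.5:
            n += 1
        return prime(n)

    rng = random.Random(7)
    pairs = [(sample(rng), sample(rng)) for _ in range(100_000)]
    # Empirically P(X=2 and Y=2) should be near P(X=2) * P(Y=2) = 0.25.
    both = sum(1 for x, y in pairs if x == 2 and y == 2) / len(pairs)
    print(f"P(X=2, Y=2) ~ {both:.3f}  (product of marginals is 0.25)")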
People are interested in things and like spending their time on them; that is called "living". Yet other people make careers out of expanding the bounds of knowledge for humanity, often with no clear application in mind; this is called foundational research.
Sometimes if we are lucky either one of those yields phenomenal practical applications, just because some nerd thought there was a missing piece in the puzzle and they ought to find it.
I know many nowadays believe that the sole goal of humanity ought to be the increase of shareholder value, even if said increase is at odds with human survival on this planet. Then 99% of us just exist, work our asses off, with little to no time spent with our loved ones while leaving the planet and humanity in a worse state than previous generations — and then we die.
Scientific work is too often challenged with this kind of question. If all you care about is results you know will happen, you will never discover anything you don't already know.
Yes to you and the person you are responding to! And the boorishness here is coming from a "tech person" [0], no less.
What have the technologically capable people, the ones architecting these systems the past few decades, given the world? A handful of Big Tech behemoths, with all the terrifically negative, stultifying effects that has had. The computing world has been willfully fragmented, and the landscape is awash with casualties; namely, every person out there who is terrified of their computing devices, who panics when the first pop-up screen appears.
Which is surely the minority on here, but in the big bad world, I would guess it's easily a majority of people.
And then the computer types have the gall to ask what number theory has done for humanity..!
The following should go without saying, but let's say it anyway: just because tech-y startups continue to attract historic levels of investment, doesn't necessarily mean that the stuff the tech world produces is in any way useful or special or good or interesting. If you're not going to read a book or something on the topic (number theory), at least browse a couple of wikipedia articles, or get an LLM to summarise it for you, or something.
[0] I'm guessing this entirely from the tone and the forum we're on. Please tell me if I'm guessing incorrectly!
Dismissing purpose, and living in aimless complacency, is also boorish.
I don't want to come off as anti-pig here; pigs are OK. Number theory is OK too; it's probably the branch of mathematics I dislike least. But it's laudable to sometimes ask "what is it all for?", without wanting to attack or threaten anybody's occupation. No easy answer is available, but it's worth asking.
> No easy answer is available, but it's worth asking.
Beyond "expanding our knowledge about 'x'", a lot of disciplines don't have much of an answer, so I don't see the value in asking the question apart from trying to dismiss the relevant discipline.
So many of our technological advances have relied on chance discoveries (e.g. penicillin), so we can't predict ahead of time what the end uses are going to be. This is especially the case with maths as it is so abstract, but where would we be without Boolean logic, public key cryptography, information theory etc.?
> > No easy answer is available, but it's worth asking.
> .. trying to dismiss the relevant discipline
Yeah. The question is worth asking yourself, your teachers, colleagues, and your friends.. but usually not worth asking (or answering!) strangers on the internet.
This type of “but what is it all for!” question has a huge asymmetry where it’s easy to ask and hard to answer, so it comes across as trolling. And if you’re talking to an adult individual who wants to argue not even about the priorities of basic research but about the fundamental point of any of it, you can be pretty sure any conversation about the subject is pointless. People who want to take an anti-intellectual stance aren’t waiting to hear a good argument before they change their point of view.
I have a PhD in the humanities, have taught at the college level, published work of interest to no one outside of a small group of specialists, and am a professional software engineer (concealing my field and CV because my story is unusual and I like privacy).
I wouldn't identify the tech world with intellectual work. I mean, setting up a lemonade stand is arguably intellectual work if we use the term in a broad enough sense.
By intellectual work, I mean (and most people here mean, I think) that which has no clear or intended use other than to explore, master, discover, learn -- pure research, study, teaching -- or isn't justified by real-world, practical "usefulness."
I tend to share just about all of Jaron Lanier's views on the tech world you're describing -- critical of free and open culture, unimpressed by and dismissive of LLMs and "AI," hostile to social media, and overwhelmingly disappointed.
Intellectual, academic, edifying work continues, entirely separately, to be worthwhile. I don't ask what number theory does for humanity, because I don't expect or need it to be useful in any popularly "practical" sense, any more than I expect a poem to be.
The resources that the public basically "donates" to science are limited, and there's always the question of priorities. Not just money, but people themselves. This is a very valid question. One could argue that there's potential in studying almost anything: prime numbers, deep space, exotic matter, crashing particles into each other, discovering language structure, developing new programming paradigms, modeling economic behaviour, modeling nuclear winter. But those giving the money need to choose and allocate. And those who want to study prime numbers while receiving money from the government inevitably need to show why this particular study has value.
That validation already happens. I worked as a scientist for a decade before I went to industry, getting grants was a constant struggle - agencies and politicians don't just give out money randomly. One can argue that these processes aren't perfect and that some money gets wasted, and that it makes for weird incentives - but that's a whole other debate.
And even that time pales before the mental toll. When I first went about writing a grant application, I thought it would be technically easy: I knew the bureaucratic details would be tedious, but otherwise straightforward. Oh boy, I think it was the most mentally taxing, soul-crushing work-related task I ever attempted. I failed to submit, by the way. My research output also plummeted that year.
Classic "penny wise, pound fooling" crap. We could probably be a hundred or more years ahead of where we are if we stopped sabotaging ourselves like this...
This type of expenditure is a rounding error. We spend more dollars charging phones to flip through videos on TikTok. And TikTok only exists because of this type of curiosity-based research. Not to mention antibiotics, refrigerators, cars, GPS, etc, etc, etc. The list is literally too long.
100% of all progress is science/technology/infrastructure. Without it we would be living in caves.
I am not a mathematician, but here is a motivation I read somewhere some years ago.
There are basically two ways to produce big numbers: add two small numbers, or multiply two small numbers. You can produce all positive integers by starting with zero and repeatedly adding one. You can almost do the same thing with multiplication too, except for these pesky primes, which are somehow atomic. Naturally then, one might ask: (a) How many primes are there? (b) How frequently do they occur? (c) Can we look at a number and determine whether it is a prime?
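The "atomic" role of primes under multiplication, in code (a minimal trial-division sketch):

    # Every integer > 1 decomposes uniquely into prime "atoms"
    # (the fundamental theorem of arithmetic); trial division finds them.
    def prime_factors(n: int) -> list[int]:
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)      # whatever remains is itself prime
        return factors

    print(prime_factors(360))      # [2, 2, 2, 3, 3, 5]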
Now consider: despite number theory being among the oldest of the mathematical disciplines, there are still open problems about primes that can be explained to high school students.
Also, multiplication and addition are not simply operations that are of interest with respect to integers, but similar ideas apply to a bunch of other domains too. Polynomials, for example. So primality and primality-like ideas are like catnip for mathematicians.
Prime numbers are one of the distinguishing features of number theory, which means they also show up all over the place in anything related to discrete mathematics, which in turn means they show up all over the place in computer science.
Any maths-y field of study that has the concept of decomposition also has the concept of primality, usually in a way that relates to primality in the natural numbers. This means anything we learn about prime numbers also extends to those other fields of study.
Actually AES, unlike more ad-hoc block ciphers, is based on the theory of finite fields, including GF(2^8), which is used for its non-linear component.
The theory of finite fields is based on the theory of prime numbers, because the finite fields are sets of residues modulo a prime number or modulo a power of a prime number.
The theory of finite fields is involved in the design of many other block cipher functions or secure hash functions and also in the design of the most important message-authentication methods, like GCM, which is used to authenticate this HTML page on the HN site.
So prime numbers are important in most cryptographic applications, not only in asymmetric cryptography, like Diffie-Hellman or RSA. Prime numbers are used in one way or another for the transmission of any HTTPS data packet, not only in the key establishment phase of a TLS connection.
> The theory of finite fields is based on the theory of prime numbers, because the finite fields are sets of residues modulo a prime number or modulo a power of a prime number.
It is not quite correct that the finite field of order p^k is the set of residues modulo p^k when k > 1. Instead this field is obtained as an extension of the field of order p (which is the set of residues mod p), e.g. as polynomials over that field reduced modulo an irreducible polynomial of degree k.
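To make the distinction concrete (a sketch; the reduction polynomial below is the one AES actually uses for GF(2^8)): the integers mod 256 are not a field, since 2 * 128 = 0 there and so 2 has no inverse, whereas polynomials over GF(2) reduced modulo an irreducible degree-8 polynomial do form one.

    # Multiplication in GF(2^8) as used by AES: bytes are polynomials
    # over GF(2), reduced modulo x^8 + x^4 + x^3 + x + 1 (0x11B).
    def gf256_mul(a: int, b: int) -> int:
        result = 0
        while b:
            if b & 1:
                result ^= a        # add (XOR) the current multiple
            a <<= 1
            if a & 0x100:
                a ^= 0x11B         # reduce modulo the AES polynomial
            b >>= 1
        return result

    # Every nonzero element has an inverse, e.g. 0x02 * 0x8D = 1:
    print(hex(gf256_mul(0x02, 0x8D)))   # 0x1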
Prime numbers and elliptic curves are much more connected than one might expect. Each elliptic curve generates a function similar to the zeta function, and there is a version of the Riemann hypothesis for elliptic curves: https://m.youtube.com/@PeakMathLandscape
ECC is pretty closely related to the study of prime numbers. It might not be built directly on the difficulty of factoring, but the theory of how to construct curves, how to use them, what's expected to be secure etc goes pretty deep.
With asymmetric crypto, you must also send by courier the root certificates (downloading Chrome or Firefox just fulfills the role of a courier that is not very trustworthy).
There exists absolutely no method of secure communication that does not depend on a piece of information that is transmitted separately, through a presumed trustworthy courier. All the existing methods only attempt to minimize the amount of information that must be sent through the secure courier.
With symmetric crypto without digital signatures but with some kind of Diffie-Hellman, you must send by courier only a pre-shared key, which is used solely for computing the message-authentication codes on the couple of packets of the key-exchange algorithm when establishing a secure connection.
Using only symmetric crypto, secure communication can be performed in pretty much the same way as with asymmetric crypto, by generating fresh random session keys for every connection.
The only difference is that the key exchange packets are authenticated with a MAC using a pre-shared key, instead of being authenticated with digital signatures and a chain of certificates going to trusted root certificates.
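A skeletal illustration of that design (assuming the pyca/cryptography package; the "protocol" here is purely illustrative, not a real implementation):

    # Ephemeral X25519 key exchange where each side's key share is
    # authenticated by an HMAC under a pre-shared key instead of by
    # certificates and signatures.
    import hashlib, hmac, os
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey)
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat)

    PSK = os.urandom(32)     # distributed once, e.g. by physical pairing

    def make_keyshare(psk: bytes):
        priv = X25519PrivateKey.generate()
        pub = priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
        tag = hmac.new(psk, pub, hashlib.sha256).digest()
        return priv, pub, tag

    a_priv, a_pub, a_tag = make_keyshare(PSK)
    b_priv, b_pub, b_tag = make_keyshare(PSK)

    # Each side verifies the peer's MAC before trusting the key share.
    assert hmac.compare_digest(
        b_tag, hmac.new(PSK, b_pub, hashlib.sha256).digest())
    assert hmac.compare_digest(
        a_tag, hmac.new(PSK, a_pub, hashlib.sha256).digest())

    # Fresh session secret per connection (forward secrecy comes from
    # the ephemeral Diffie-Hellman, not from the long-lived PSK).
    assert a_priv.exchange(b_priv.public_key()) == \
           b_priv.exchange(a_priv.public_key())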
If for some weird reason one would not want to use a Diffie-Hellman variant (e.g. with elliptic curves) to protect the session keys, one could use another pre-shared key only for encrypting the key-exchange packets.
There are only two advantages for asymmetric crypto, when used for secure communication connections.
The first is provided by Diffie-Hellman in any of its variants, which ensures perfect forward secrecy, i.e. even knowing all the content of some sessions, including their secret keys, that does not allow the decryption of other sessions. Without Diffie-Hellman, if the pre-shared encryption key that is used to protect the key exchange packets is captured, all recorded sessions could be decrypted. This can be only partially avoided by changing that key frequently, which would prevent the decryption of past sessions, but not the decryption of future sessions.
The second advantage is provided by the authentication of the key exchange with digital signatures instead of MACs based on pre-shared keys, which is the possibility of half authentication, where the server is authenticated based on the certificates provided by it, but the client is not authenticated, which is the most frequent kind of secure communication used on the Internet.
For communication inside a closed environment, i.e. a private network, using key exchange authentication based on pre-shared keys (but with elliptic-curve Diffie-Hellman for protecting the session keys) can be simpler, faster and more secure than using digital signatures and certificates.
While in the beginning I used your metaphor about sending a pre-shared key or the root certificates by courier, the normal mode of transferring pre-shared authentication keys is by initial physical pairing (e.g. a cable connection) of the devices that must be capable of communicating securely between themselves.
For the same reason we send probes to outer space. We are curious about the universe. There is something special about the prime numbers that we don't understand. Until we do, people will have the itch to keep looking.
Isn't basic research always like that? According to Wikipedia[1]: "aim of improving scientific theories for better understanding and prediction of natural or other phenomena". There is no implied success (it's only an "aim"), or utility, beyond that for science itself.
How much we want to support that (financially, socially, etc.) is a question a bit like how much we want to support children playing. Some disagree that such things should be supported at all, others are indifferent, yet others take pride in supporting or having supported them. The answer, to both of those questions, does have a large effect on what our societies look like. However, answering in the affirmative does not guarantee any positive progress. Likewise, answering in the negative does not prevent progress, or basic research, or children's play from happening.
Science often discovers and quantifies natural phenomena that are useful outside science. Whether pure math dealing with gazillion-digit-long primes can be of any use outside of satisfying curiosity is unclear.
Aren't some modern digital cryptography methods based on exactly that?
I do agree on the view that science often discovers useful phenomena. What I tried to stress was that basic research does not, by definition, aim for such utility. Especially with pure math, whether there are any applications for new, even groundbreaking discoveries, is often very unclear. And when there are, they might be only utilized decades or centuries after the initial discovery.
We’re really bad at handling large, complex structures.
Mathematics dealing with large primes and their complex structures is likely to find applications in other complex structures, e.g. in physics or computer science.
Mathematics is modern ontology: even when its self-investigation is not directly applicable, the vocabulary and semantics developed is often useful for articulating other truths.
"This interests me" is a hidden moral judgment. Morality is all about deciding what to do next. It's right to sometimes ask a question about aimlessness. Feeling interested motivates us to ignore that question, because it's already answered by the feeling. In the stirring of interest is concealed an intuitive master plan, which says "I don't know where this leads but it feels worthwhile".
Sometimes it's right to drag those intuitive feelings into the light and force them to explain themselves, and come up with some clue about in what way futzing around with (for instance) prime numbers might contribute to all the rest of the sprawling web of things we generally value in life. But enthusiasm is a precious and wholesome thing, so people rarely question it.
It sounds like finding explanations for your interests is useful to you, but I don’t think that generalizes.
Many people are completely comfortable pursuing interests without needing or wanting a logical framework to explain/justify.
I enjoy cooking, in the sense that I study and try to understand and improve at a technical level. I probably could come up with a rationale for why, but I suspect it would be post hoc reasoning, so why bother?
Prime numbers are used in cryptography. Public-private key cryptography is based on the fact that it is hard to find the original two large prime numbers that were multiplied together from their result.
For example, you see written 721,000,165,331. It's hard to work out that it is the product of the two prime numbers 730487 * 987013.
So if they can find bigger prime numbers with a new, faster method, we can use larger primes and have safer cryptography.
(That's the simplified version that I remember as a software engineer)
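The asymmetry is easy to feel even at toy sizes (a sketch; real RSA moduli have hundreds of digits, where this kind of search becomes hopeless):

    # Multiplying is instant; recovering the factors takes real search.
    import time

    n = 730487 * 987013            # = 721000165331, computed instantly

    start = time.perf_counter()
    d = 3
    while n % d:                   # trial division over odd candidates
        d += 2
    print(d, n // d, f"found in {time.perf_counter() - start:.2f}s")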
> Prime numbers are used in cryptography. Public-private key cryptography is based on the fact that is hard to find the original two large prime numbers
That's true of one particular – albeit very popular – asymmetric cryptosystem, RSA. It isn't a property of asymmetric cryptography in general. There are other asymmetric encryption schemes which aren't based on the hardness of prime factorisation (e.g. DSA, elliptic curves, McEliece, NTRU)
For the researchers - because it is fun, most likely, and they get paid for it. For society, I am glad we live in a society where some money is skimmed off for curiosity. But for practical reasons - this stuff (or some other bet) rears up as useful years down the line for something practical. Maybe some kind of cryptography or making quantum computing feasible... who knows! Imaginary numbers are pretty useful in science, and they probably seemed exotic when they were first talked about.
Prime numbers are like the atoms of numbers. They are indivisible and you can make all the other numbers from them. So, finding out more about primes translates into tighter constraints or proofs in many other (often not obviously related) theories.
And, it's beautiful to think about. Maybe huge practical innovations might result, or maybe only the pleasure of understanding something deep about numbers in the short time we exist.
I recall many years ago hearing that a mathematician had invented something and was very happy about the fact that it had absolutely no practical use. I may remember the details incorrectly but I believe it was one-way functions -- which are used all over the place now in computer security. Someone please correct me if I have the details wrong here.
It depends on which parts of math you feel are 'more real than others'.
To someone who has only just learned math, the increasing counting numbers (1, 2, 3, 466..) are all 'real things'.
But if you are very cynical about math, you might instead argue that the only 'real' number we have is the number '1'. All those others (2, 466..) are just "applications" of that '1'.
That is to say, we adopt the shorthand '466', because we don't want to write down almost 500 1's each time we reference it. (think of it like very bad roman numerals..)
In this perspective, where we ignore 'ordinary' numbers like 466, you might argue that prime numbers are more real, because they really 'do' something (that is, construct composite numbers, like the example with 1 above).
You could have a thought experiment of a world, in which we never developed arabic numerals or roman numerals, but instead did all our math directly on prime numbers. It would be a weird world, but still, you might imagine it :-)
As a former number theorist turned software engineer, I’ve noodled on connections between algebraic number theory and fairly concrete applications here:
Some good answers here, but I hate that this has been downvoted. It is a valid and reasonable question - we shouldn't be downvoting questions like some echo chamber Reddit thread.
Prime numbers are a core and mysterious numeric progression, and it is very taxing to determine the prime factors of any sufficiently large number; this is why they are used in cryptography. Investigations into the nature of primes have produced many mathematical tools and have bridged many different areas of mathematics together.
But the simplest answer is that prime numbers are a tantalizing mystery that is easy for anyone to understand. The deeper you dig, the deeper the hole gets, and it almost doesn't matter if a satisfying "answer" is ever found. Primes are a MacGuffin that has led to countless discoveries.
Yes. You shouldn't be downvoted because it's a reasonable question and opens things up for fascinating exploration.
I guess there are many things interesting about it, but I see it like this: prime numbers are the fundamental pattern of magnitudes, where the next prime is the first place that no multiples of any previous magnitude (prime or not) would ever land on.
In other words, if you took any previous magnitude (ie, any number less than that next prime), and copied it over edge to edge, the edge would never line up with that next prime.
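That "edges never lining up" picture is exactly the sieve of Eratosthenes (a minimal sketch; crossing off multiples of primes alone suffices, since every composite magnitude is itself built from prime ones):

    # Cross off every position some earlier magnitude's copies land on;
    # the next position nothing lands on is the next prime.
    def primes_up_to(limit: int) -> list[int]:
        landed_on = [False] * (limit + 1)
        primes = []
        for n in range(2, limit + 1):
            if not landed_on[n]:
                primes.append(n)       # no earlier edge lines up with n
                for m in range(n * n, limit + 1, n):
                    landed_on[m] = True
        return primes

    print(primes_up_to(50))   # [2, 3, 5, 7, 11, ..., 47]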
Because counting and magnitude are so simple and fundamental to the space of concepts and even to reality, it's pretty fascinating that this extremely simple-to-describe pattern is nevertheless hard to create a description for that's more concise than including all previous primes.
And I think people like finding that kind of 'shorter' description, as it indicates a deeper understanding, a new way of looking at reality that you didn't see before. And when we see that, it will probably be very useful to many other things.
It's fascinating to reflect on all that, and also on how this fundamental pattern of magnitudes, their 'self-similar but scaling' structure, also relates to the 'compressibility' of the number line and information theory.
That's what I think. I think everyone can find their own interest in there, there's probably a lot of ways to look at it. :)
Investigating primes is nearly as old as mathematics itself, and it's reasonable to assume other ideas were discovered in the hopes of applying them to various problems incorporating prime numbers.
From a practical, applied perspective, "understanding" primes, that is, making their "hidden" structure a known "truth", would either confirm or deny the Riemann hypothesis, whereupon many other conjectures that assume the hypothesis to be true would also be "truly" known.
Or from TFA:
> …In the 19th century, research on these kinds of statements led to the development of much of modern number theory. In the 20th century, it helped inspire one of the most ambitious mathematical efforts to date, the Langlands program. And in the 21st, work on these sorts of primes has continued to yield new techniques and insights.
> …Their [the article’s subjects’] proof, which was posted online in October, doesn’t just sharpen mathematicians’ understanding of the primes. It also makes use of a set of tools from a very different area of mathematics, suggesting that those tools are far more powerful than mathematicians imagined, and potentially ripe for applications elsewhere.
The Riemann hypothesis makes me feel dumb - not just because I can’t solve it, no great shame in that - I genuinely get lost in amazement and wonderment at the mind that develops a function, graphs it, and gleans some insight into numbers.
Something about it I find humbling and makes me think about the archetype of mathematicians that lose their minds to numbers.
It is mesmerizing, but do note it was not a single mind that produced this insight. It was centuries of work. It involved, among many others:
1. Newton and the Bernoulli family developing the theory of infinite series and connecting them to discrete sequences,
2. Wallis developing the first notions of infinite products and demonstrating the first non-trivial convergence of such,
3. Euler solving the Basel problem and linking the zeta function to the prime numbers (giving a new proof of the infinitude of primes),
4. Gauss and Eisenstein further using Euler's ideas and their own unique algebraic insights to understand primes in arithmetic progressions, and finally
5. Riemann taking the zeta function, putting it in the complex plane, revealing the unifying theme connecting the previous discoveries and making his own fundamentally new discoveries with the explicit formula.
And of course the development only accelerated from that point on.
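Euler's link in item 3 can even be checked numerically (a sketch using sympy for the primes): the sum over all integers equals the product over the prime "atoms".

    # Basel problem meets the Euler product at s = 2:
    #   zeta(2) = sum 1/n^2 = prod over primes 1/(1 - 1/p^2) = pi^2/6
    from math import pi
    from sympy import primerange

    zeta_sum = sum(1 / n**2 for n in range(1, 2_000_000))
    euler_prod = 1.0
    for p in primerange(2, 10_000):
        euler_prod *= 1 / (1 - 1 / p**2)

    print(zeta_sum, euler_prod, pi**2 / 6)   # all agree to several digits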
That’s exactly how I begin to put it into context and rationalize this kind of work - he was a mathematician, so this is the kind of thing he worked on, and he was also working within a body of maths and knowledge.
It’s much like physics and the great physics experiments throughout history for me, some of them I’d like to think I may have been able to develop, but others I just marvel at the ingeniousness of the experiments.
Realistically in a vacuum I doubt I’d have even identified/defined prime numbers.
Thank you for this. I've favorited this comment so that I can read on each of these to sate my curiosity. Now I'm off to search for accessible resources for these topics for those of us non-mathematicians ;-)
I think once you understand how to apply analytic continuation to the problem, its relation to primes is much more apparent, even without a full understanding of the history.