Obvious first example: all primes above 2 are of the form 2x+1. An obvious repeating pattern of primes.
You can take this a small step further. All primes above 6 are of the form 6x+1 or 6x+5. Anything else is a multiple of 2 or 3. Above 6 only 1/3 of numbers are worthy of being considered prime. This is a slightly less obvious example.
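A quick sanity check of the 6x+1 / 6x+5 claim (this sketch and its helper `is_prime` are mine, not from the original post):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Every prime above 3 should leave remainder 1 or 5 when divided by 6.
residues = {p % 6 for p in range(5, 10_000) if is_prime(p)}
print(residues)  # {1, 5}
```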
A small step further - all primes above 30 are of the form 30x+1, 30x+7, 30x+11, 30x+13, 30x+17, 30x+19, 30x+23 or 30x+29. Anything else is a multiple of 2, 3 or 5. So above 30 only 8/30 numbers are worthy of being considered prime. See how we've created a new pattern from the multiple of 2x3x5 to rule out a swath of prime candidates.
I could repeat this for each prime found. eg. I could take the common multiple of 2, 3, 5 and 7 (210) and create a similar pattern for all numbers above 210 that rules out the repeated multiples of 2, 3, 5 and 7 (leaving us just 48/210 numbers worthy of being considered prime).
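The candidate counts at each step (2/6, 8/30, 48/210) are just the residues coprime to the wheel modulus; a minimal sketch to verify them:

```python
from math import gcd

# For each primorial modulus, count the residues that could still be
# prime, i.e. the residues sharing no factor with the modulus.
for modulus in (6, 30, 210):
    candidates = [r for r in range(modulus) if gcd(r, modulus) == 1]
    print(modulus, len(candidates))
# 6 2
# 30 8
# 210 48
```

This is the "wheel factorization" trick; each new prime multiplies the wheel size but shrinks the fraction of surviving candidates.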
This is why you see peaks of primes at various repeating multiples. For every new prime found, you can take the product of it and all previous primes. From that, you can rule out primality for various offsets from any multiple of that number. So primes above a certain point can only possibly exist in certain forms, which is why you see primes at repeated offsets from each other - the primes can only exist in those forms.
I actually wrote and described it back in 2002 - https://forums.overclockers.com.au/threads/show-off.71630/#p...
I didn't think much of it back when I did it, and I'm not a mathematician, just a hobbyist. I guess I should publish my prime hobbies more...
Some accessible examples would be Fermat's Last Theorem or the four color problem.
I'm assuming the researchers don't intend to confuse a crystal lattice structure with an actual mathematical lattice; while the two may share similar influences in their models, one is math, the other is physics.
There is. Even though most physics research is extremely mathematical and abstract these days, it's still ostensibly grounded in empirical science. Math is not science, it just provides useful tools and insights for studying science. Unlike physics, the disciplinary imperative of math is not to provide us with truths about this world or any other world. Its imperative is to tell us what must follow as a consequence from a given set of assumptions and definitions. This is a very important philosophical dichotomy because it means that even the most esoteric, abstract problems in physics (such as moonshine in high energy physics) are still grounded in something "real." Math need not be grounded in anything real; it can be decoupled entirely from what is real or even possible.
> Keeping math "pure" really means keeping it "purely abstract", so it resists any kind of practical application.
I'm not one to be elitist with regards to pure versus applied mathematics, so I sympathize with your point here. That being said, purely abstract mathematics can be extremely useful even if it doesn't ultimately relate to the real world. Consider what G.H. Hardy wrote nearly a century ago in A Mathematician's Apology:
"...both Gauss and lesser mathematicians may be justified in rejoicing that there is [number theory] at any rate...whose very remoteness from ordinary human activities should keep it gentle and clean."
If only Hardy had lived long enough to see his pure and beautiful number theory sullied with the applications to error-correcting codes and cryptography.
Math is how physics is given practical application; if anything, that means science is more abstract than math is.
Physics should not influence math. Math has to be a consequence of its own axioms, or have its properties tested against its own constructions.
Otherwise math falls apart. Then it's useless for physics.
Practical application is wonderful for math. But the direction math is crafted and interpreted matters a lot.
Could you explain what you mean by that?
The difference is physics has reality to test against - observations that can be measured. Math does not have this. Math's only metric against itself is itself.
Also worth noting that Freeman Dyson pointed out the link between prime numbers, or the Riemann zeta function, and quasicrystals, see p. 2-3 of http://www.ams.org/notices/200902/rtx090200212p.pdf
And here are some MathOverflow discussions of Dyson’s ideas: https://mathoverflow.net/questions/133581/quasicrystals-and-...
First, remember that 'primeness' is not really a property of a number; it's the absence of the property of being composite. "Can only be divided..." meaning "Can't be divided..." meaning "Not composite."
Second, crystals are formed by repeating patterns. What happens if you compose a pattern from many repeating patterns and overlay them? There would be 'features' where the patterns don't overlap.
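This "overlaid patterns" picture is essentially the sieve of Eratosthenes; a minimal sketch (names are mine, not from the comment above):

```python
# Overlay the repeating pattern of multiples of each small prime.
# The positions never covered by any pattern are exactly the primes.
N = 50
covered = [False] * (N + 1)
for p in range(2, N + 1):
    if not covered[p]:              # p starts a new repeating pattern
        for m in range(p * p, N + 1, p):
            covered[m] = True       # overlay multiples of p
primes = [n for n in range(2, N + 1) if not covered[n]]
print(primes)
```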
Do papers try to make things seem difficult, exciting and mysterious on purpose?
What do you mean by this? What do you believe a "property" is?
> Do papers try to make things seem difficult, exciting and mysterious on purpose?
Yes, being surprising/exciting is a criterion for being published.
A sort of example from the book that plays in the "background": https://en.wikipedia.org/wiki/Berry_paradox
RSA utilizes an extremely large semiprime (the product of two very large prime numbers) to generate a public/private key pair. This result does not meaningfully change anything related to the computational work required to factor semiprime numbers that have over 600 digits.
It's the non-factorability of primes that is important.
Primes, by definition, have two and only two factors.
The difficulty of factoring a number which is the product of two large primes is the important bit.
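A toy illustration of why that difficulty matters (a sketch with made-up small numbers, not a real attack): trial division needs on the order of min(p, q) steps, which is hopeless for the hundreds-of-digits primes used in RSA.

```python
def factor_semiprime(n):
    """Factor n = p*q by trial division; cost grows with the smaller prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n itself was prime

# Tiny example: 53 * 61 = 3233 is found almost instantly...
print(factor_semiprime(3233))  # (53, 61)
# ...but for a 2048-bit modulus the smaller prime has ~308 digits,
# so this loop would never finish.
```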
Also, the Möbius function becomes strange with infinitely many factors.
Having no proper factors is the definition of an irreducible element.
The two definitions agree for the integers and other nice algebraic structures (UFDs), but there are algebraic structures in which they are not the same, which explains why there are two different definitions and names in abstract algebra.
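The textbook example of the split is Z[√-5], where 2 is irreducible but not prime: 2 divides 6 = (1+√-5)(1-√-5) yet divides neither factor. A small sketch (representation and helper names are mine), using pairs (a, b) for a + b√-5:

```python
def mul(x, y):
    """(a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5."""
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """N(a + b√-5) = a^2 + 5b^2; multiplicative, so it tracks factorizations."""
    a, b = x
    return a * a + 5 * b * b

# 6 factors two genuinely different ways:
print(mul((2, 0), (3, 0)))   # (6, 0)
print(mul((1, 1), (1, -1)))  # (6, 0)

# 2 is irreducible: a proper factorization would need an element of
# norm 2, but a^2 + 5b^2 = 2 has no integer solutions.
print(any(norm((a, b)) == 2 for a in range(-3, 4) for b in range(-3, 4)))  # False
```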
The only unknown is which two primes.
The authors don't provide a complexity analysis of their algorithm. Without attempting to derive that analysis myself, I'm deeply skeptical that their algorithm can find arbitrarily large prime numbers in sub-exponential time, let alone the polynomial time needed for breaking classical public-key cryptography. The absence of such a complexity analysis is conspicuous, because such an analysis would be groundbreaking.
However, I don't need to lean on the conspicuous absence of a proof of polynomial time complexity for their algorithm, because their algorithm has explicit limitations. It's an approximation method (albeit with high accuracy) that can only find primes located in dyadic intervals having many Bragg peaks and a relatively small left endpoint. In the paper they represent this interval with (M, M + L); in general a dyadic interval is an interval of the form (k/2^n, (k+1)/2^n), which may be either open or closed.
In other words, the algorithm has costly limitations, no proof of polynomial time complexity, and the authors don't even study its behavior for a left endpoint greater than 10^6. RSA semiprime numbers have over 600 digits - this result isn't even in the same galaxy. Considering the explicit limitations in the authors' paper and the lack of interesting analysis indicating otherwise, it's likely that the problem of finding a suitable dyadic interval containing a large semiprime number's factors will just reduce to the problem of finding its factors anyway.
For what it's worth, this is all distinct from a more conceptual problem that undermines any use this result could have for cryptanalysis. Take a look at the prime counting function. Per the prime counting function, there are approximately 2.53274 × 10^305 primes and 2.27654 × 10^613 primes less than 2^1024 and 2^2048, respectively. It's physically impossible to construct a lookup table consisting of all products of all pairs of primes in either set because there isn't enough matter in the universe to contain the information. But even if there was, you would never feasibly pick out the correct semiprime number corresponding to a given RSA private key by doing lookups in this way.
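Those counts come straight from the prime counting approximation π(x) ≈ x / ln(x); a sketch reproducing them in log space (2^1024 overflows a float, so we never materialize it):

```python
import math

def approx_pi_log10(bits):
    """log10 of pi(2^bits) under the estimate pi(x) ~ x / ln(x)."""
    return bits * math.log10(2) - math.log10(bits * math.log(2))

for bits in (1024, 2048):
    lg = approx_pi_log10(bits)
    exp = math.floor(lg)
    print(f"pi(2^{bits}) ~ {10 ** (lg - exp):.4f} x 10^{exp}")
# pi(2^1024) ~ 2.5327 x 10^305
# pi(2^2048) ~ 2.2765 x 10^613
```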
We already have very large lists of primes. It’s choosing which primes are used that is the expensive part of attacking crypto.
Additionally, the OP oversimplified how the cryptography works by saying you just need to find the products to decrypt.