> on January 18, 2018, I found a numerical sequence that generated the exact same pattern as Shaun’s pattern
Does this mean that we have a sort of bloom-filter-esque test for primality? (i.e., it will give you a guaranteed no in O(1), but you'll have to crunch numbers to get the yes?)
If so, are there implications for things that want to know "is it prime?" quickly? Cryptography comes to mind, for instance...
We already have fast algorithms that answer "is it prime?" with certainty. Reducing the required time from O(log^6(n)) to O(1) isn't particularly important from a cryptographic point of view.
Note that log(n) is the length of the prime, so if you are using 2048-bit primes, log^6(n) is quite large. I don't think anyone actually uses one of the general deterministic primality testing algorithms in cryptographic applications.
We do not have a fast, certain algorithm for large numbers. The only thing we have for large numbers is "it's probably a prime." If the number is of a special form, such as a Mersenne number, there are specialized algorithms, but even those can take something like 10 days to confirm whether it is actually prime. If it's not of a special form for which we have a specialized algorithm, you're SOL: if you really want to know for certain whether it's prime, you have to use one of the clever brute-force algorithms that, for example, check only odd divisors and only up to the square root of the probable prime, and those are still slow as all get out.
So speed improvements are welcome for academic use. Although this doesn’t look like it’s a game changer.
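The brute-force approach described above can be sketched in a few lines of Python (the function name is my own; this is an illustration of trial division, not something usable at cryptographic sizes):

```python
def is_prime_trial_division(n: int) -> bool:
    """Deterministic primality check by trial division.

    Handles 2 separately, then checks only odd candidate divisors
    up to sqrt(n). Correct for any n, but exponential in the bit
    length of n -- hopeless for, say, 2048-bit numbers.
    """
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True
```

Even with the "odds only, up to the square root" optimizations, the running time grows with sqrt(n), which is why it only works for modestly sized inputs.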
If by "bloom filter-esque" you mean a probabilistic way of testing whether a number is prime or not, then the answer would be yes for a majority of numbers.
How? Just run the Fermat test (based on Fermat's little theorem) with multiple values of "a" until you feel comfortable. [1]
Why only a majority? There are exceptions, such as the Carmichael numbers, for which we need to use slower algorithms to verify primality.
Note that I'm assuming the number of calls to an algorithm is constant since it would be naive to discount the size of the number you're testing.
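A minimal sketch of the Fermat test described above (Python; the function name and the default number of rounds are my own choices):

```python
import random

def fermat_test(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test via Fermat's little theorem.

    If n is prime, then a^(n-1) == 1 (mod n) for every a coprime
    to n. A single failing base proves n composite; passing every
    round only means "probably prime" -- Carmichael numbers such
    as 561 fool the test for every base coprime to them.
    """
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:  # modular exponentiation, O(log n) multiplies
            return False  # definitely composite
    return True  # probably prime
```

Each round is cheap because Python's three-argument `pow` does modular exponentiation, so the cost per call grows only polynomially in the bit length of n.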
What are the implications of this result? Off the top of my head I can only think of one: the problem of deciding whether a number is prime or not is in the complexity class P (decidable in polynomial time). [2]
How does this affect cryptography? I would say not by that much.
Why? The "hardness" of some forms of cryptography (asymmetric) doesn't come from the difficulty of determining whether a number is prime; it comes from the difficulty of determining the factors of a number. [3] That problem has far greater implications for the field.
The question of whether it is possible to factor in polynomial time on a classical computer is actually an open question in CS right now.
Note the emphasis on classical! Amazingly, there exists a polynomial-time algorithm to do so on a quantum computer: Shor's algorithm. [4]