This seems like an incredible magic trick, but once you look closely, it is a magic trick - any series of numbers can be expressed as a constant with this formula.
In a different world, we settled on a system of notation based on continued fractions rather than decimals for writing non-integers. In this world, nobody marvels at the irregularity of pi or e, the fact that they seem to go on for ever without a pattern - both numbers have elegant and regular representations as infinite sums of fractions.
In this world, we all found it slightly harder to make change at the grocery store, but perhaps we made up for that by producing a million high-school Ramanujans.
> In a different world, we settled on a system of notation based on continued
> fractions rather than decimals for writing non-integers. In this world, nobody
> marvels at the irregularity of pi or e, the fact that they seem to go on for
> ever without a pattern - both numbers have elegant and regular representations
> as infinite sums of fractions.
There are some less widely-known topics in math that seem to make some of those who learn them want to "evangelize" about them and wish they had a starring role. Continued fractions are one of them.
Now, don't get me wrong. Continued fractions are very cool, and some of the associated results are very beautiful. More people should know about them. But they never will be a viable alternative to decimals. Computation is too hard with them, for one.
Also, while e has a nice regular continued fraction expansion [1], that is not the case for pi [2]. There is no known formula for the terms; they are as irregular as the decimal digits. There are nice simple formulas for pi as infinite sums of fractions (the simplest is probably [3]), but those are not continued fractions.
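To make the "regular pattern" point concrete, here is a small sketch (mine, not from the linked references) that recovers the continued-fraction terms of e from a partial sum of 1/k! - the 2, 1, 2, 1, 1, 4, 1, 1, 6, ... pattern shows up immediately:
from fractions import Fraction
from math import factorial
e_approx = sum(Fraction(1, factorial(k)) for k in range(30))  # rational approximation of e
x, terms = e_approx, []
for _ in range(15):
    a = int(x)        # next continued-fraction term
    terms.append(a)
    x = 1 / (x - a)
print(terms)          # [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10]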
> But they never will be a viable alternative to decimals. Computation is too hard with them, for one.
I think this is too narrow-minded.
You could make the same argument for ideograms vs alphabetic writing: one is clearly superior and you could never have a technological superpower that relies primarily on the other, but thanks to historical path dependency we actually have both.
I could imagine a world where the SI system never took off in engineering, due to stubborn people at inopportune moments. Engineers and physicists would still get their jobs done in imperial units, just like American carpenters do today.
Also I did elide the distinction between continued fractions and infinite sums of fractions, but again we can use our imagination and say that if continued fractions were commonplace, we'd all be a lot more familiar with the infinite sums too.
Good points! Showing what constraints have to be met and proving that the prime numbers meet the requirements does seem worthy of a paper. But once that paper is in, hopefully we don't need 300,000 papers, one for each other sequence in the OEIS, proving that it can or can't be expressed like this.
I think another interesting avenue for future research is proving whether measures applied to prime numbers (e.g. their statistical distribution) have an equivalence in the Buenos Aires Constant. Intuitively, it seems likely, but I don't have a proof, just a hunch.
I wish you were right, but what we see as something simple is sometimes made simple by a huge amount of work.
Look at how we learn maths and physics right now: it is much simpler than what they had back then. Centuries of human work / knowledge / capital were compressed into tiny powerful equations / definitions / demonstrations, empowering new humans to look at more complex stuff more easily. And there is a good opportunity right now to simplify this even more.
s = 2.920050977316134
for _ in range(16):
    i = int(s)              # the integer part is the next prime
    print(i)
    s = i*(1 + s - i)       # rescale the fractional part
You might ask, "Isn't this just continued fractions?" But it's not, quite. The continued-fraction version of that script would be:
s = 2.313036736433583
for _ in range(16):
    i = int(s)              # the integer part is the next term
    print(i)
    s = 1 / (s - i)         # standard continued-fraction step
The latter (continued-fraction) version is good for only 8 prime terms before it breaks down at the limits of IEEE double precision. The former (TFA, Buenos-Aires-constant) version is good for 13 prime terms! That is, the Buenos Aires protocol is a noticeably better "compressor" than the continued-fraction protocol, given the fixed bit-budget of IEEE double-precision floating point.
If I'm reading it right, the explanation is that whereas the continued-fraction protocol is general enough to compress any positive-integer sequence, the Buenos Aires protocol can compress only monotonically nondecreasing positive-integer sequences where each term is between 1 and 2 times the previous term. (The sequence of primes is such a sequence.) Greater flexibility means lower efficiency, and vice versa.
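For what it's worth, you can also run the first snippet in reverse. Unrolling s = i*(1 + s - i) gives s_n = p_n - 1 + s_{n+1}/p_n, so the constant should just be the sum of (p_n - 1) divided by the product of the preceding primes. A quick sketch (my own derivation from the recurrence; the paper states the closed form precisely):
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
c, denom = 0.0, 1
for p in primes:
    c += (p - 1) / denom   # add (p_n - 1) / (p_1 * ... * p_(n-1))
    denom *= p
print(c)                   # approximately 2.920050977316134, the value fed to the first snippet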
I often prefer code over math notation, but in this case, I find the math statement clearer - see Theorem 1 right at the start of this paper:
https://arxiv.org/pdf/2010.15882
The paper says the mathematician James Grime helped "tidy up the proofs" - some people may recognize him from Numberphile on YouTube.
IDK why, but this factoid is blowing my mind right now.
One of the ways to broadcast evidence of Math + Computing power could be to calculate this constant to an absurd-enough precision and have SETI (or someone) send the digits out into space.
Edit: in the sense that, we know how to encode a large set of primes in a single transcendental number, and by the way, here it is.
I'm not sure why you ask - the above number already showed off the transition between 1-digit and 2-digit primes. And 0s inside the primes should not be a problem.
Hmmm, what's the algorithm to extract 101 and not 10101 or 1010103 as the next prime? I suppose you can remember how many digits were in the last prime and only allow the same or 1 more, which should be unambiguous. Though it relies on there being no very long runs (an entire order of magnitude!) with no primes - which, fortunately, follows from Bertrand's postulate.
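Something like the following sketch (my own, just to illustrate the rule you describe) seems to work: try the previous width first and fall back to one extra digit whenever the candidate isn't larger than the last prime.
digits = "2357111317192329313741434753596167717379838997101103107109113"
primes, pos, width, last = [], 0, 1, 0
while pos + width <= len(digits):
    candidate = int(digits[pos:pos + width])
    if candidate <= last:                # too small: the next prime gained a digit
        width += 1
        if pos + width > len(digits):
            break
        candidate = int(digits[pos:pos + width])
    primes.append(candidate)
    last = candidate
    pos += width
print(primes)   # [2, 3, 5, 7, 11, ..., 109, 113]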
The point is, with a constructed transcendental number, you can represent any series at all, so long as you have "enough precision."
That last part means you can simply adjust your precision to your desired constraints. If you want to unambiguously represent the primes up through all the four-digit primes, simply change your padding accordingly: 0.000200030005....
Obviously the Buenos Aires Constant in the article is a neater representation, and relies on neater math. But GP's point was that you can stuff anything at all into a transcendental number.
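For concreteness, a tiny sketch (my own) of that four-digit-padded scheme:
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]      # any primes below 10**4 work the same way
encoded = "0." + "".join(f"{p:04d}" for p in primes)
frac = encoded[2:]                              # strip the "0."
decoded = [int(frac[i:i + 4]) for i in range(0, len(frac), 4)]
print(encoded)   # 0.000200030005000700110013001700190023
print(decoded)   # [2, 3, 5, 7, 11, 13, 17, 19, 23]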
Seems like the constant is another representation of the set of all prime numbers. I wonder if there is a branch of math that formalizes these different representations - there is the infinite series here that defines the constant, but how does that relate to the set of primes? And what are the other representations?
There is a well-known polynomial in 26 variables which has the curious property that as you substitute nonnegative integers for the variables, the positive values of the polynomial are exactly the set of prime numbers. (The polynomial also yields negative values.)
When put like this, it sounds like the polynomial must reveal something deep about the primes... but it's another cool magic trick. The MRDP theorem (famous for solving Hilbert's 10th problem negatively) implies that this kind of multivariate polynomial exists for exactly those sets of natural numbers that are computably enumerable, so the polynomials could be seen as a really esoteric programming language for set-enumeration algorithms.
To add to that, quoting from the paper [1] (published in 2020):
> ACKNOWLEDGMENTS:
> Dylan, Juli, Bruno and Massi are a group of 18/19-year-old friends from Buenos
> Aires, Argentina. The original idea came to Juli while having a shower. Bruno
> calculated the prime-generating constant, first by brute-force and then by
> finding its formula. As the investigation continued, Juli and Bruno were joined
> by Massi and Dylan. Later, the team contacted mathematician James Grime who
> helped by tidying up some of the proofs and writing this note. So ∞ thanks to
> James!
Same, I actually took Calc II with Juli; we started at FCEN the same year, but he was obviously very bright, and I haven't seen him since, so I assume he progressed very quickly.
I sometimes come across your comments here, but I rarely have a chance to reply. I almost wrote the year as 02020 above just in case you found it; now I'm sad I didn't.
The Python code seems to be a kind of arithmetic decoder with the assumption that each prime is less than twice the previous prime. That seems to be a special case of Bertrand's Postulate, but probably there's a tighter bound on prime gaps one could use to get a more efficient encoding: https://en.wikipedia.org/wiki/Prime_gap#Upper_bounds
Of course, another way of making this a more efficient encoding would be to take account of the fact that primes tend to be odd: if p is an odd prime then the next prime is one of the (p - 1)/2 values p + 2, p + 4, ..., 2p - 1.
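A rough sketch of that idea (entirely my own construction, using exact rationals to sidestep the floating-point issue): encode each step as the index of the actual gap among those (p - 1)/2 possibilities, then fold the indices into a single fraction the same way the decoder above unfolds them.
from fractions import Fraction
primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]   # start at 3; the step 2 -> 3 doesn't fit the odd-gap scheme
f = Fraction(0)                       # encode, folding in the last gap first
for p, q in reversed(list(zip(primes, primes[1:]))):
    size = (p - 1) // 2               # number of possible next primes from p
    k = (q - p) // 2                  # which one actually occurs (1-based)
    f = (k - 1 + f) / size
print(float(f))                       # the "odd gaps" constant for this list
p, out = 3, [3]                       # decode, using only f and the known start
for _ in range(len(primes) - 1):
    t = f * ((p - 1) // 2)
    k = int(t) + 1
    f = t - int(t)
    p += 2 * k
    out.append(p)
print(out)                            # [3, 5, 7, 11, ..., 43]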
So you can write a program of 8+C bytes that can generate ~42 primes in linear time. Is there a theoretical minimum program size that can generate n primes in O(n) time?